ICAART 2014 Abstracts


Area 1 - Artificial Intelligence

Full Papers
Paper Nr: 18
Title:

Online Knowledge Gradient Exploration in an Unknown Environment

Authors:

Saba Q. Yahyaa and Bernard Manderick

Abstract: We present online kernel-based LSPI (least squares policy iteration), an extension of offline kernel-based LSPI. Online kernel-based LSPI combines characteristics of both online LSPI and offline kernel-based LSPI to improve the convergence rate as well as the optimal-policy performance of online LSPI. It uses the knowledge gradient policy for exploration and the approximate linear dependency based kernel sparsification method to select features automatically. We compare the optimal-policy performance of online kernel-based LSPI and online LSPI on five discrete Markov decision problems, where online kernel-based LSPI outperforms online LSPI.

Paper Nr: 20
Title:

Constructing a Non-task-oriented Dialogue Agent using Statistical Response Method and Gamification

Authors:

Michimasa Inaba, Naoyuki Iwata, Fujio Toriumi, Takatsugu Hirayama, Yu Enokibori, Kenichi Takahashi and Kenji Mase

Abstract: This paper provides a novel method for building non-task-oriented dialogue agents such as chatbots. A dialogue agent constructed with our method automatically selects a suitable utterance, depending on the context, from a set of candidate utterances prepared in advance. To realize automatic utterance selection, we rank the candidate utterances in order of suitability by applying a machine learning algorithm, employing both right and wrong dialogue data to learn the relative suitability used for ranking. Additionally, we provide a low-cost, quality-assured environment for acquiring learning data using crowdsourcing and gamification. The results of an experiment using learning data obtained via this environment demonstrate that the appropriate utterance is ranked at the top in 82.6% of cases and within the top 3 in 95.0% of cases. The results also show that using context information, which most existing agents ignore, is necessary for appropriate responses.

Paper Nr: 21
Title:

Solving Single Vehicle Pickup and Delivery Problems with Time Windows and Capacity Constraints using Nested Monte-Carlo Search

Authors:

Stefan Edelkamp and Max Gath

Abstract: Transporting goods by courier and express services increases service quality through short transit times and satisfies individual customer demands. Determining the optimal route for a vehicle that must satisfy transport requests while minimizing total cost is known as the Single Vehicle Pickup and Delivery Problem. Besides time and distance objectives, real-world operations make it mandatory to consider further constraints such as time windows and the capacity of the vehicle. This paper presents a novel approach that solves Single Vehicle Pickup and Delivery Problems with time windows and capacity constraints by applying Nested Monte-Carlo Search (NMCS), a randomized exploration technique that has successfully solved complex combinatorial search problems. To evaluate the approach, we apply benchmark instances with up to 400 cities that have to be visited, and investigate the effects of varying the number of iterations and the search level. The results reveal that the algorithm computes state-of-the-art solutions and is competitive with other approaches.
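
The paper's own NMCS variant is not reproduced here, but the generic Nested Monte-Carlo Search recursion it builds on can be sketched for a small closed-tour routing problem. The city coordinates, move/scoring functions and the nesting level below are illustrative assumptions, not taken from the paper:

```python
import math
import random

# Toy closed-tour instance; coordinates are invented for illustration.
CITIES = {0: (0, 0), 1: (0, 2), 2: (2, 2), 3: (2, 0), 4: (1, 3)}

def dist(a, b):
    (x1, y1), (x2, y2) = CITIES[a], CITIES[b]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    # Closed tour: the vehicle returns to its starting city.
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def apply_move(state, city):
    tour, remaining = state
    return tour + (city,), remaining - {city}

def playout(state):
    # Level 0: finish the tour with uniformly random moves.
    tour, remaining = state
    while remaining:
        tour, remaining = apply_move((tour, remaining), random.choice(sorted(remaining)))
    return tour_length(tour), tour

def nmcs(state, level):
    # At level > 0, score every legal move with a search one level below,
    # remember the best complete tour seen, and commit to the best move.
    if level == 0:
        return playout(state)
    best = (float("inf"), None)
    while state[1]:
        step_best = None
        for city in sorted(state[1]):
            result = nmcs(apply_move(state, city), level - 1)
            if result[0] < best[0]:
                best = result
            if step_best is None or result[0] < step_best[0]:
                step_best = (result[0], city)
        state = apply_move(state, step_best[1])
    return best

random.seed(1)
score, tour = nmcs(((0,), frozenset(CITIES) - {0}), level=2)
```

Higher levels trade exponentially more playouts for better tours; the paper additionally handles pickup/delivery precedence, time windows and capacity, which this sketch omits.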

Paper Nr: 32
Title:

A Method for Document Image Binarization based on Histogram Matching and Repeated Contrast Enhancement

Authors:

Mattias Wahde

Abstract: In this paper, a new method for binarization of document images is introduced. During training, the method stores histograms from training images (divided into small tiles), along with the optimal binarization threshold. Training image tiles are presented in pairs, one noisy version and one clean binarized version, where the latter is used for finding the optimal binarization threshold. During use, the method considers the tiles of an image one by one, matching the stored histograms to the histogram of the tile that is to be binarized. If a sufficiently close match is found, the tile is binarized using the threshold associated with the stored histogram. If no match is found, the contrast of the tile is slightly enhanced, and a new attempt is made. This sequence is repeated until either a match is found or a (rare) timeout is reached. The method has been applied to a set of test images and has been shown to outperform several comparable methods.
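
The match-or-enhance retry loop described above can be sketched as follows; the histogram distance, tolerance, contrast-stretch factor and timeout fallback are illustrative assumptions rather than the paper's actual choices:

```python
import numpy as np

def binarize_tile(tile, stored, tol=0.1, max_rounds=20):
    # stored: (histogram, threshold) pairs learned from training tiles.
    tile = tile.astype(np.float64)
    for _ in range(max_rounds):
        hist, _ = np.histogram(tile, bins=256, range=(0, 256))
        hist = hist / tile.size  # normalized grey-level histogram
        for ref_hist, threshold in stored:
            if np.abs(hist - ref_hist).sum() < tol:  # sufficiently close match?
                return (tile > threshold).astype(np.uint8)
        # No match: slightly enhance contrast around the tile mean and retry.
        m = tile.mean()
        tile = np.clip(m + 1.05 * (tile - m), 0, 255)
    # (Rare) timeout: fall back to a simple global threshold.
    return (tile > tile.mean()).astype(np.uint8)
```

Each contrast-enhancement round moves the tile's histogram closer to one of the stored high-contrast training histograms, which is what makes a later match likely.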

Paper Nr: 49
Title:

mC-ReliefF - An Extension of ReliefF for Cost-based Feature Selection

Authors:

Verónica Bolón-Canedo, Beatriz Remeseiro, Noelia Sánchez-Maroño and Amparo Alonso-Betanzos

Abstract: The proliferation of high-dimensional data in the last few years has made dimensionality reduction techniques a necessity, among which feature selection is arguably the most popular. Feature selection consists of detecting the relevant features and discarding the irrelevant ones. However, in some situations users are interested not only in the relevance of the selected features but also in the costs they imply, e.g. economic or computational costs. In this paper, an extension of the well-known ReliefF feature selection method is proposed: a new term is added to the function that updates the feature weights, so as to reach a trade-off between the relevance of a feature and its associated cost. The behavior of the proposed method is tested on twelve heterogeneous classification datasets as well as a real application, using a support vector machine (SVM) as the classifier. The results of the experimental study show that the approach is sound, since it allows the user to reduce the cost significantly without compromising the classification error.
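
The abstract does not give the exact form of the new term, but the idea of penalizing feature weights by cost can be illustrated with a simplified Relief-style update. The nearest hit/miss scheme with k=1 and the weighting factor `lam` below are assumptions for illustration, not the paper's mC-ReliefF formulation:

```python
import numpy as np

def cost_relief(X, y, costs, lam=0.3, n_iter=200, seed=0):
    # Simplified Relief-style weight update with an added cost term:
    # each feature's weight gains relevance (miss-diff minus hit-diff)
    # and loses lam * cost, trading relevance against acquisition cost.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    costs = np.asarray(costs, dtype=float)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                       # never pick the sample itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dists, np.inf))    # nearest same-class
        miss = np.argmin(np.where(~same, dists, np.inf))  # nearest other-class
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit]) - lam * costs) / n_iter
    return w
```

Raising `lam` pushes expensive features down the ranking even when they are mildly relevant, which is the trade-off the abstract describes.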

Paper Nr: 52
Title:

Probabilistic Cognitive Maps - Semantics of a Cognitive Map when the Values are Assumed to be Probabilities

Authors:

Aymeric Le Dorze, Béatrice Duval, Laurent Garcia, David Genest, Philippe Leray and Stéphane Loiseau

Abstract: Cognitive maps are a knowledge representation model that describes influences between concepts by a graph, where each influence is quantified by a value. These values are generally not formally defined. In this paper, we introduce a new cognitive map model, the probabilistic cognitive map, in which the values of the influences are interpreted as probabilities. We formally define the semantics of this model and provide an operation, called the probabilistic propagated influence, that computes the global influence of one concept on any other. To show that our model is valid, we propose a procedure to represent a probabilistic cognitive map as a Bayesian network. This new model strengthens cognitive maps by giving them a strong semantics; moreover, it acts as a bridge between cognitive maps and Bayesian networks.

Paper Nr: 56
Title:

A Faster Algorithm for Checking the Dynamic Controllability of Simple Temporal Networks with Uncertainty

Authors:

Luke Hunsberger

Abstract: A Simple Temporal Network (STN) is a structure containing time-points and temporal constraints that an agent can use to manage its activities. A Simple Temporal Network with Uncertainty (STNU) augments an STN to include contingent links that can be used to represent actions with uncertain durations. The most important property of an STNU is whether it is dynamically controllable (DC)—that is, whether there exists a strategy for executing its time-points such that all constraints will necessarily be satisfied no matter how the contingent durations happen to turn out (within their known bounds). The fastest algorithm for checking the dynamic controllability of STNUs reported in the literature so far is the O(N^4)-time algorithm due to Morris. This paper presents a new DC-checking algorithm that empirical results confirm is faster than Morris’ algorithm, in many cases showing an order of magnitude speed-up. The algorithm employs two novel techniques. First, new constraints generated by propagation are immediately incorporated into the network using a technique called rotating Dijkstra. Second, a heuristic that exploits the nesting structure of certain paths in the STNU graph is used to determine a good order in which to process the contingent links during constraint propagation.

Paper Nr: 78
Title:

Knowledge Gradient for Multi-objective Multi-armed Bandit Algorithms

Authors:

Saba Q. Yahyaa, Madalina M. Drugan and Bernard Manderick

Abstract: We extend the knowledge gradient (KG) policy to the multi-objective multi-armed bandit problem in order to efficiently explore the Pareto optimal arms. We consider two order relationships on the mean vectors: the Pareto partial order and scalarization functions. Pareto KG finds the optimal arms using Pareto search, while scalarized KG transforms the multi-objective arms into single-objective arms to find the optimal ones. To measure the performance of the proposed algorithms, we propose three regret measures. We compare the performance of the knowledge gradient policy with UCB1 on a multi-objective multi-armed bandit problem, where KG outperforms UCB1.

Paper Nr: 80
Title:

Evolutionary Fuzzy Rule Construction for Iterative Object Segmentation

Authors:

Junji Otsuka and Tomoharu Nagao

Abstract: This paper presents Cellular Fuzzy Oriented Classifier Evolution (CFORCE), a generic method for constructing fuzzy rules that divide an image into two segments: object and background. In CFORCE, a pair of fuzzy classification rule sets for object and background is defined as a processing unit, and identical units are allocated to each pixel of an input image. Each unit iteratively computes the matching degree of its pixel with the object and background classes while considering the matching degrees of neighboring units. The algorithm has two main features: 1) the fuzzy rules, represented as directed graphs, are designed flexibly and automatically by a Genetic Algorithm using Fuzzy Oriented Classifier Evolution (FORCE), and 2) segmentation is performed iteratively, considering the spatial relationships between pixels in addition to local features. In natural image segmentation, many pixels overlap between different clusters, and considering spatial relationships is important to classify these overlapping pixels correctly. We applied CFORCE to three different object segmentation tasks and showed that it extracted the object regions successfully.

Paper Nr: 86
Title:

Self-adaptive Topology Neural Network for Online Incremental Learning

Authors:

Beatriz Pérez-Sánchez, Oscar Fontenla-Romero and Bertha Guijarro-Berdiñas

Abstract: Many real problems in machine learning are of a dynamic nature. In those cases, the model used for the learning process should work in real time and have the ability to act and react by itself, adjusting its controlling parameters, and even its structure, depending on the requirements of the process. In a previous work, the authors proposed an online learning method for two-layer feedforward neural networks with two main characteristics: it is effective in dynamic as well as stationary contexts, and it allows incorporating new hidden neurons during learning without losing the knowledge already acquired. In this paper, we extend this algorithm with a mechanism that automatically adapts the network topology to the needs of the learning process. This automatic estimation technique is based on the Vapnik-Chervonenkis dimension. The theoretical basis for the method is given and its performance is illustrated by applying it to distinct system identification problems. The results confirm that the proposed method is able to check whether new hidden units should be added depending on the requirements of the online learning process.

Paper Nr: 88
Title:

Semantic Anonymisation of Set-valued Data

Authors:

Montserrat Batet, Arnau Erola, David Sánchez and Jordi Castellà-Roca

Abstract: Companies and organisations quite commonly need to release and exchange information related to individuals. Due to the usually sensitive nature of these data, appropriate measures should be applied to reduce the risk of re-identification of individuals while keeping as much data utility as possible. Many anonymisation mechanisms have been developed to date, even though most of them focus on structured/relational databases containing numerical or categorical data. The anonymisation of transactional data, also known as set-valued data, has received much less attention. The management and transformation of these data present additional challenges due to their variable cardinality and their usually textual and unbounded nature. Current approaches to set-valued data are based on the generalisation of the original values, which suffers from a high information loss derived from the reduced granularity of the output values. To tackle this problem, in this paper we adapt a well-known microaggregation anonymisation mechanism so that it can be applied to textual set-valued data. Moreover, since the utility of textual data is closely related to their meaning, special care has been taken to preserve data semantics; to do so, appropriate semantic similarity and aggregation functions are proposed. Experiments conducted on a real set-valued data set show that our proposal preserves data utility better than non-semantic approaches.

Paper Nr: 93
Title:

Finding Outliers in Satellite Patterns by Learning Pattern Identities

Authors:

Fabien Bouleau and Christoph Schommer

Abstract: Spacecraft provide a large set of information on their on-board components, such as temperature, power and pressure. This information is constantly monitored by engineers, who capture the outliers and determine whether the situation is abnormal or not. However, due to the large quantity of information, only a small part of the data is processed or used for early anomaly detection. A commonly accepted research concept for anomaly prediction described in the literature relies on probability-based projections estimated from patterns learned from past data (Fujimaki et al., 2005) and on data mining methods that enhance the conventional diagnosis approach (Li et al., 2010). Most of these works conclude on the need to build a pattern identity chart. We propose an algorithm for efficient outlier detection that builds an identity chart of the patterns in past data based on their curve-fitting information. It detects the functional units of the patterns without a priori knowledge, with the intent of learning their structure and reconstructing the sequence of events described by the signal. On top of statistical elements, each pattern is allotted a characteristics chart. This pattern identity enables fast pattern matching across the data, and the extracted features allow classification with regular methods such as support vector machines (SVM). The algorithm has been tested and evaluated on real satellite telemetry data; the outcome and performance show promising results for faster anomaly prediction.

Paper Nr: 100
Title:

A Probabilistic Implementation of Emotional BDI Agents

Authors:

João Gluz and Patricia Jaques

Abstract: A very well known reasoning model in Artificial Intelligence is the BDI (Belief-Desire-Intention) model. A BDI agent should be able to choose the most rational action to perform, with bounded resources and incomplete knowledge, in an acceptable time. Although humans need emotions in order to make immediate decisions with incomplete information, traditional BDI models do not take the affective states of the agent into account. In this paper we present an implementation of the appraisal process of emotions in BDI agents using a BDI language that integrates logic and probabilistic reasoning. Specifically, we implement the event-generated emotions with consequences for self, based on the OCC cognitive psychological theory of emotions. We also present an illustrative scenario and its implementation. One original aspect of this work is that we implement emotion intensity using a probabilistic extension of a BDI language; this intensity is defined by the desirability central value, as pointed out by the OCC model. In this way, our implementation of an emotional BDI agent makes it possible to differentiate between emotions and affective reactions. This is important because emotions tend to generate stronger responses, and the intensity of an emotion also determines the intensity of an individual's reaction.

Paper Nr: 103
Title:

Classical Dynamic Controllability Revisited - A Tighter Bound on the Classical Algorithm

Authors:

Mikael Nilsson, Jonas Kvarnström and Patrick Doherty

Abstract: Simple Temporal Networks with Uncertainty (STNUs) allow the representation of temporal problems where some durations are uncontrollable (determined by nature), as is often the case for actions in planning. It is essential to verify that such networks are dynamically controllable (DC) – executable regardless of the outcomes of uncontrollable durations – and to convert them to an executable form. We use insights from incremental DC verification algorithms to re-analyze the original verification algorithm. This algorithm, thought to be pseudo-polynomial and subsumed by an O(n^5) algorithm and later an O(n^4) algorithm, is in fact O(n^4) given a small modification. This makes the algorithm attractive once again, given its basis in a less complex and more intuitive theory. Finally, we discuss a change reducing the amount of work performed by the algorithm.

Paper Nr: 124
Title:

Robust Execution of Rover Plans via Action Modalities Reconfiguration

Authors:

Enrico Scala, Roberto Micalizio and Pietro Torasso

Abstract: Robust execution of exploration mission plans has to deal with the limited computational power on board a planetary rover, and with the rover's limited autonomy. In most cases, these limitations practically prevent the rover from synthesizing a new mission plan when some unexpected contingency arises. This paper shows that when such deviations refer to anomalies in resource consumption, robust execution can be achieved efficiently through an action reconfiguration approach instead of replanning from scratch. Building on an extended action model representation, the paper proposes an effective continual planner, ReCon, that exploits a general-purpose CSP solver to (i) detect violations of mission resource constraints, and (ii) find (if any exists) a new configuration of actions.

Paper Nr: 140
Title:

Improving Query Expansion by Automatic Query Disambiguation in Intelligent Information Retrieval

Authors:

Oussama Ben Khiroun, Bilel Elayeb, Ibrahim Bounhas, Fabrice Evrard and Narjès Bellamine Ben Saoud

Abstract: We study in this paper the impact of Word Sense Disambiguation (WSD) on Query Expansion (QE) for monolingual intelligent information retrieval. The proposed approaches for WSD and QE are based on corpus analysis using co-occurrence graphs modelled by possibilistic networks. Indeed, our model for relevance judgment uses possibility theory to take advantage of a double measure (possibility and necessity). Our experiments are performed using the standard ROMANSEVAL test collection for the WSD task and the CLEF-2003 benchmark for the QE process in French monolingual Information Retrieval (IR) evaluation. The results show the positive impact of WSD on QE based on the standard recall/precision metrics.

Paper Nr: 143
Title:

Tracking Assembly Processes and Providing Assistance in Smart Factories

Authors:

Sebastian Bader and Mario Aehnelt

Abstract: Tracking assembly processes is a necessary prerequisite to provide assistance in smart factories. In this paper, we show how to track the construction of complex components. For this we employ formal task models as background knowledge and simple sensors like RFIDs. The background knowledge is converted into a probabilistic model that actually tracks the process. As a result, we are able to provide assistance in smart factories. We discuss the performance of the approach, as well as potential applications.

Paper Nr: 153
Title:

Image Quality Assessment using ANFIS Approach

Authors:

El-Sayed M. El-Alfy and Mohammed R. Riaz

Abstract: Due to the increasing use of digital images in electronic systems, it becomes important to evaluate the degradation in image quality during acquisition, processing, storage and transmission. In this paper, we investigate the ability of the adaptive neuro-fuzzy inference system (ANFIS) to assess the quality of digital images with respect to original (reference) images. Several objective quality metrics are calculated and used as inputs to an adaptive fuzzy inference system, which in turn estimates a differential mean opinion score (DMOS) for different types of distortions. The predicted values are compared with the actual DMOS values using correlation and error measures. With a 7-input ANFIS network, the results on a publicly available and subjectively rated image database show that the predicted DMOS values are highly correlated with the actual values. For example, for images distorted by JPEG 2000 compression, the attained correlation coefficient, Spearman’s rank correlation, and RMSE are 0.9944, 0.9902, and 3.32, respectively. These results show that combining the advantages of neural networks with fuzzy systems can be a promising approach for predicting the subjective quality of digital images.

Paper Nr: 208
Title:

Multiagent Planning Supported by Plan Diversity Metrics and Landmark Actions

Authors:

Jan Tožička, Jan Jakubův, Karel Durkota, Antonín Komenda and Michal Pěchouček

Abstract: Problems of domain-independent multiagent planning for cooperative agents in deterministic environments can be tackled by the well-known initiator–participants scheme from classical multiagent negotiation protocols. In this work, we use this approach to describe a multiagent extension of the Generate-And-Test principle that distributively searches for a coordinated multiagent plan. The generate part uses a novel plan quality estimation technique based on metrics borrowed from the field of diverse planning. The test part builds upon planning with landmarks by compilation to classical planning. Finally, the proposed multiagent planning approach was experimentally analyzed on one newly designed domain and one classical benchmark domain. The results show which combination of plan quality estimation and diversity metrics provides the best planning efficiency.

Paper Nr: 227
Title:

Agent-based Simulations of Patterns for Self-adaptive Systems

Authors:

Mariachiara Puviani, Giacomo Cabri and Franco Zambonelli

Abstract: Self-adaptive systems are distributed computing systems composed of different components that can adapt their behavior to different kinds of conditions. This adaptation concerns not only the single components but the entire system. In a previous work we identified several patterns for self-adaptation and classified them by means of a taxonomy intended to support developers of self-adaptive systems. Starting from that theoretical work, we have simulated the described self-adaptation patterns in order to better understand the concrete features of each pattern. The contribution of this paper is to report on this simulation work, detailing how it was carried out, and to present a “table of applicability” that completes the initial taxonomy of patterns and provides further support for developers.

Paper Nr: 231
Title:

A Hierarchical Clustering Based Heuristic for Automatic Clustering

Authors:

François LaPlante, Nabil Belacel and Mustapha Kardouchi

Abstract: Determining an optimal number of clusters and producing reliable results are two challenging and critical tasks in cluster analysis. We propose a clustering method which produces valid results while automatically determining an optimal number of clusters, without user input pertaining directly to the number of clusters. The method consists of two main components: splitting and merging. In the splitting phase, a divisive hierarchical clustering method (based on the DIANA algorithm) is executed and interrupted by a heuristic function once the partial result is considered “adequate”. This partial result, which is likely to have too many clusters, is then fed into the merging phase, which merges clusters until the final optimal result is reached. Our method’s effectiveness in clustering various data sets is demonstrated, including its ability to produce valid results on data sets presenting nested or interlocking shapes. Using cluster validity analysis, the method is compared with methods that are provided with a known optimal number of clusters and with other automatic clustering methods. Depending on the particularities of the data set used, our method produces results that are roughly equivalent to or better than those of the compared methods.

Paper Nr: 232
Title:

A Framework for High-throughput Gene Signatures with Microarray-based Brain Cancer Gene Expression Profiling Data

Authors:

Hung-Ming Lai, Andreas Albrecht and Kathleen Steinhöfel

Abstract: Cancer classification through high-throughput gene expression profiles has been widely used in biomedical research. Recently, we presented a multivariate method for large-scale gene selection based on information theory, with feature interdependence as the central issue, and validated its effectiveness on a colon cancer benchmark. The present paper further develops that work on feature interdependence. Firstly, we have refined the method and proposed a complete framework for selecting a gene signature for a given disease phenotype prediction under high-throughput technologies. The framework has then been applied to a brain cancer gene expression profile derived from the Affymetrix Human Genome U95Av2 Array, where the number of interrogated genes is six times larger than in the previously studied colon cancer data set. Three information theory based filters were used for comparison, and our experimental results show that the framework outperforms them on three classification performance measures. Additionally, to demonstrate how effectively feature interdependence can be tackled within the framework, two sets of enrichment analysis have been performed. The results show that more statistically significant gene sets and regulatory interactions can be found in our gene signature. This framework is therefore promising for high-throughput gene selection that accounts for gene synergy.

Paper Nr: 233
Title:

Interest Operator Analysis for Automatic Assessment of Spontaneous Gestures in Audiometries

Authors:

A. Fernández, J. Marey, M. Ortega and M. G. Penedo

Abstract: Hearing loss is a common disease which affects a large percentage of the population. It may have a negative impact on health, social participation, and daily activities, so its diagnosis and monitoring are indeed important. The audiometric tests related to this diagnosis are constrained when the patient suffers from some form of cognitive impairment. In these cases, the audiologist must try to detect particular facial reactions that may indicate auditory perception. With the aim of supporting the audiologist in this evaluation, a screening method was proposed that analyzes video sequences and searches for facial reactions within the eye area. In this research, a comprehensive survey of one of the most relevant steps of this methodology is presented, considering different alternatives for the detection of the interest points and for the classification techniques. The results provided make it possible to determine the most suitable configuration for this domain.

Paper Nr: 234
Title:

A Composed Confidence Measure for Automatic Face Recognition in Uncontrolled Environment

Authors:

Pavel Král and Ladislav Lenc

Abstract: This paper focuses on automatic face recognition in order to annotate people in photographs taken in a completely uncontrolled environment. The recognition accuracy of current approaches is not sufficient in this case, and it is thus beneficial to improve the results. We address this issue by proposing a novel confidence measure method to identify the incorrectly classified examples at the output of our classifier. The proposed approach combines, in a supervised way, two measures based on the posterior probability and two based on the predictor features. The experiments show that the proposed approach is very efficient, as it detects almost all erroneous examples.

Short Papers
Paper Nr: 27
Title:

Providing Accessibility to Hearing-disabled by a Basque to Sign Language Translation System

Authors:

María del Puy Carretero, Miren Urteaga, Aitor Ardanza, Mikel Eizagirre, Sara García and David Oyarzun

Abstract: Translation between spoken languages and Sign Languages is especially weak for minority languages; hence, audiovisual material in these languages is usually out of reach for people with a hearing impairment. This paper presents a domain-specific Basque text to Spanish Sign Language (LSE) translation system. It has a modular architecture with (1) a text-to-Sign Language translation module using a rule-based translation approach, (2) a gesture capture system combining two motion capture systems to create an internal (3) sign dictionary, (4) an animation engine and (5) a rendering module. The translation is performed by a virtual interpreter that executes the concatenation of the signs according to the grammatical rules of LSE; for a better LSE interpretation, its face and body expressions change according to the emotion to be expressed. A first prototype has been tested by LSE experts with preliminary satisfactory results.

Paper Nr: 29
Title:

Experiments Assessing Learning of Agent Behavior using Genetic Programming with Multiple Trees

Authors:

Takashi Ito, Kenichi Takahashi and Michimasa Inaba

Abstract: In this paper, experiments assessing the learning of agent behavior are conducted to demonstrate the performance of genetic programming (GP) with multiple trees. In these methods, each individual has a chromosome representing agent behavior as several trees. We have proposed two variants, using conditional probability and the island model, to improve the methods' performance. In GP using conditional probability, individuals with high fitness values are used to produce conditional probability tables that generate the individuals of the next generation. In GP using the island model, the population is divided into two islands: one island maintains the diversity of individuals, while the other emphasizes the accuracy of the solution. Moreover, this paper improves the methods so as to seek the optimal number of executions of each tree in an individual. These methods are applied to a garbage collection problem and the Santa Fe Trail problem, and are compared with traditional GP, GP with control nodes, and genetic network programming (GNP) with control nodes. Experimental results show that our methods are effective in improving the fitness.

Paper Nr: 47
Title:

Predictive Text System for Bahasa with Frequency, n-gram, Probability Table and Syntactic using Grammar

Authors:

Derwin Suhartono, Garry Wong, Polim Kusuma and Silviana Saputra

Abstract: A predictive text system is an alternative way to improve human communication, especially typing. Originally, predictive text systems were intended for people with verbal and motor impairments; today they are aimed at anyone who demands speed and accuracy when typing a document. Many similar systems have been developed, each with its own strengths and weaknesses. This research develops an algorithm for a predictive text system by combining four methods from previous research, focusing only on Bahasa (the Indonesian language): frequency, n-gram, probability table, and syntactic prediction using grammar. The frequency method ranks words based on how many times they were typed. The probability table stores data such as predefined phrases and trained data. The n-gram method is trained on data so that it is able to predict the next word based on the previous word. Finally, syntactic prediction uses grammar to predict the next word based on the syntactic relationship between the previous and next words. Using this combination, users can reduce keystrokes by up to 59%, with an average keystroke saving of about 50%.
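
As an illustration of the frequency and n-gram ingredients only (the training sentences and the fallback rule below are invented for the example; the paper's actual system also combines the probability table and grammar-based prediction):

```python
from collections import Counter, defaultdict

class BigramPredictor:
    # Frequency + bigram next-word prediction: a simplified stand-in
    # for two of the four combined methods described in the abstract.
    def __init__(self):
        self.unigrams = Counter()
        self.bigrams = defaultdict(Counter)

    def train(self, sentences):
        for sentence in sentences:
            words = sentence.lower().split()
            self.unigrams.update(words)
            for prev, nxt in zip(words, words[1:]):
                self.bigrams[prev][nxt] += 1

    def predict(self, prev_word, k=3):
        # Prefer bigram continuations of the previous word; fall back
        # to global word frequency when the word was never seen.
        table = self.bigrams.get(prev_word.lower())
        source = table if table else self.unigrams
        return [w for w, _ in source.most_common(k)]
```

Ranking candidates this way is what lets the interface surface likely completions first, which is the source of the keystroke savings the abstract reports.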

Paper Nr: 51
Title:

Validation of a Cognitive Map - Definition of Quality Criteria to Detect Contradictions in a Cognitive Map

Authors:

Aymeric Le Dorze, Laurent Garcia, David Genest and Stéphane Loiseau

Abstract: A cognitive map is a knowledge representation model in which knowledge is represented as a graph: nodes represent concepts and arcs represent influences between these concepts, each influence carrying a value that quantifies it. Although a cognitive map is quite simple to build, some influence values may contradict each other. This paper provides quality criteria to validate a cognitive map. There are two kinds of quality criteria: verification validates a cognitive map by checking its internal coherency, while testing validates a map against a set of constraints provided by the designer. These criteria indicate whether or not a map contains contradictions. We also propose a way to adapt these criteria according to the possible values that an influence can take.

Paper Nr: 53
Title:

Monte Carlo Tree Search in The Octagon Theory

Authors:

Hugo Fernandes, Pedro Nogueira and Eugénio Oliveira

Abstract: Monte Carlo Tree Search (MCTS) is a family of algorithms known for its performance on difficult problems that cannot be tackled with current technology using classical AI approaches. This paper discusses the application of MCTS techniques to the fixed-length game The Octagon Theory, comparing various policies and enhancements with the best known greedy approach and standard Monte Carlo Search. The experiments reveal that the use of Move Groups, Decisive Moves, Upper Confidence Bounds for Trees (UCT) and Limited Simulation Lengths turns a losing MCTS agent into the best performing one in a domain with an estimated game-tree complexity of 10^293, even when the provided computational budget is kept low.
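The UCT enhancement named above selects children by balancing average reward against an exploration bonus. A minimal sketch of one common form of the criterion (generic UCT with exploration constant Cp; this is not the authors' exact implementation):

```python
import math

def uct_score(child_value, child_visits, parent_visits, cp=0.7):
    """Standard UCT: mean reward plus an exploration bonus weighted by Cp."""
    if child_visits == 0:
        return float("inf")  # unvisited children are always tried first
    exploitation = child_value / child_visits
    exploration = cp * math.sqrt(2.0 * math.log(parent_visits) / child_visits)
    return exploitation + exploration

def select(children, parent_visits, cp=0.7):
    """Pick the child maximising the UCT score.

    children: list of dicts with accumulated 'value' and 'visits' counts.
    """
    return max(children,
               key=lambda c: uct_score(c["value"], c["visits"], parent_visits, cp))
```

A rarely visited child keeps a large exploration term, so the search periodically revisits it even when its mean reward is lower.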

Paper Nr: 59
Title:

Agent-based Manufacturing in a Production Grid - Adapting a Production Grid to the Production Paths

Authors:

Leo van Moergestel, Daniël Telgen, Erik Puik and John-Jules Meyer

Abstract: In standard mass production, batch processing is widely accepted. The advantage of batch processing is that production equipment can be placed in a so-called production line: a product only has to follow this line and all production steps will be performed. However, this set-up is not adequate for low-cost, small-quantity production. In this paper, agile production of small quantities in a grid of reconfigurable production machines called equiplets is described. One of the challenges in this approach is the transport of products between the equiplets during production. This paper describes heuristic methods to reduce the average path a product has to follow in the production grid.

Paper Nr: 72
Title:

Evaluation of Safe Explosive Charge in Surface Mines using Artificial Neural Network

Authors:

Manoj Khandelwal

Abstract: The present paper deals with the prediction of the maximum explosive charge used per delay (QMAX) using an artificial neural network (ANN) that incorporates peak particle velocity (PPV) and the distance from the blast face to the monitoring point (D). 150 blast vibration data sets were monitored at vulnerable and strategic locations in and around major opencast coal mines in India. 124 blast vibration records were used to train the ANN model as well as to determine the site constants of various conventional vibration predictors. The remaining 26 randomly selected data sets were used to test, evaluate and compare the ANN predictions with those of widely used conventional predictors. Results were compared based on the coefficient of correlation (R) and the mean absolute error (MAE) between calculated and predicted values of QMAX.

Paper Nr: 84
Title:

An Adaptive Model of Bus Arrival Time Prediction

Authors:

Ling Xie, Peifeng Li and Qiaoming Zhu

Abstract: Predicting bus arrival time is the foundation of intelligent bus information services. Reliable prediction of bus arrival times improves the level of public transport service and attracts more city residents to public transportation. Based on the massive historical data of a real-time bus transportation system, an adaptive model of bus arrival time prediction is proposed in this paper. Considering the volatility of bus arrival times, the model chooses adaptive prediction methods according to different dates and time periods. In addition, a further optimization for local prediction is introduced, which divides the prediction route into different combinations in view of the volatility of adjacent sections. Experimental results show that our model outperforms existing models in RMSE and MAPE.
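The comparison above is in terms of RMSE and MAPE. A minimal sketch of the two metrics using their standard definitions (the example arrival times are illustrative, not data from the paper):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between observed and predicted arrival times."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mape(actual, predicted):
    """Mean absolute percentage error (actual values must be non-zero)."""
    n = len(actual)
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / n

arrivals  = [600, 720, 480]   # observed arrival times in seconds (illustrative)
predicted = [590, 730, 500]
print(rmse(arrivals, predicted), mape(arrivals, predicted))
```

RMSE penalizes large errors quadratically, while MAPE normalizes each error by the observed value, which is why both are commonly reported together.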

Paper Nr: 90
Title:

Ranking Functions for Belief Change - A Uniform Approach to Belief Revision and Belief Progression

Authors:

Aaron Hunter

Abstract: In this paper, we explore the use of ranking functions in reasoning about belief change. It is well-known that the semantics of belief revision can be defined either through total pre-orders or through ranking functions over states. While both approaches have similar expressive power with respect to single-shot belief revision, we argue that ranking functions provide distinct advantages at both the theoretical level and the practical level, particularly when actions are introduced. We demonstrate that belief revision induces a natural algebra over ranking functions, which treats belief states and observations in the same manner. When we introduce belief progression due to actions, we show that many natural domains can be easily represented with suitable ranking functions. Our formal framework uses ranking functions to represent belief revision and belief progression in a uniform manner; we demonstrate the power of our approach through formal results, as well as a series of natural problems in commonsense reasoning.

Paper Nr: 107
Title:

Modeling and Algorithm for Dynamic Multi-objective Weighted Constraint Satisfaction Problem

Authors:

Tenda Okimoto, Tony Ribeiro, Maxime Clement and Katsumi Inoue

Abstract: A Constraint Satisfaction Problem (CSP) is a fundamental problem that can formalize various applications related to Artificial Intelligence. A Weighted Constraint Satisfaction Problem (WCSP) is a CSP where constraints can be violated, and the aim is to find an assignment that minimizes the sum of the weights of the violated constraints. Most research has focused on developing algorithms for solving static mono-objective problems. However, many real-world satisfaction/optimization problems involve multiple criteria that should be considered separately and satisfied/optimized simultaneously. Additionally, they are often dynamic, i.e., the problem changes at runtime. In this paper, we introduce a Multi-Objective WCSP (MO-WCSP) and develop a novel MO-WCSP algorithm called Multi-Objective Branch and Bound (MO-BnB), which is based on a new solution criterion called the (l, s)-Pareto solution. Furthermore, we formalize a Dynamic MO-WCSP (DMO-WCSP) for the first time. As an initial step towards developing an algorithm for solving DMO-WCSPs, we focus on changes in the weights of constraints and develop the first algorithm for this setting, called Dynamic Multi-Objective Branch and Bound (DMO-BnB), which is based on MO-BnB. Finally, we provide the complexity of our algorithms and evaluate DMO-BnB with different problem settings.
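The WCSP objective described above (minimize the summed weights of violated constraints) can be illustrated with a tiny exhaustive solver. This brute-force sketch is for illustration only; the paper's algorithms use branch and bound, and all names here are our own:

```python
from itertools import product

def wcsp_cost(assignment, constraints):
    """Sum of weights of the constraints violated by one assignment.

    constraints: list of (check, weight) pairs, where check(assignment)
    returns True when the constraint is satisfied.
    """
    return sum(w for check, w in constraints if not check(assignment))

def solve_wcsp(domains, constraints):
    """Exhaustively search all assignments for the minimum violation weight."""
    variables = list(domains)
    best, best_cost = None, float("inf")
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        cost = wcsp_cost(assignment, constraints)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost
```

In the multi-objective setting of the paper, each assignment instead maps to a vector of such costs, one per objective, and solutions are compared by Pareto dominance.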

Paper Nr: 113
Title:

Modal Specifications for Composition of Agent Behaviors

Authors:

Hikmat Farhat and Guillaume Feuillade

Abstract: The goal of the behavior composition problem is to build a complex target behavior from several agent behaviors. We propose two extensions to the framework in which agent behaviors are modeled by finite transition systems and the composition is done by coordinating the actions of the agents. The first extension makes the composition indirect: instead of choosing the actions of the agents, the composition is done by a controller issuing sets of instructions at each step. This allows us to model problems where the agents' behaviors are not fully controllable. The second extension is the use of modal specifications as a goal for the composition; these specifications express (infinite) sets of acceptable behaviors. We give an algorithm to solve the extended composition problem and show that the two extensions retain the important properties of the initial framework while the synthesis algorithm keeps the same complexity.

Paper Nr: 118
Title:

Why you should Empirically Evaluate your AI Tool - From SPOSH to yaPOSH

Authors:

Jakub Gemrot, Martin Černý and Cyril Brom

Abstract: The autonomous agents community has been developing specific agent-oriented programming languages for more than two decades. Some of these languages have been considered by academia as possible tools for developing artificial intelligence (AI) for non-player characters in computer games. However, as most of the research on new AI languages within the agent community does not reach production quality, the languages are seldom adopted by the games industry. As our experience has shown, it is not only the actual language that matters: the toolchain supporting the language and its integration (or lack thereof) with a development environment can make or break the success of the language in practical applications. In this paper, we describe our methodology for evaluating AI languages and associated tools in practice, based on controlled experiments with programmers and/or game designers. The methodology is demonstrated on our development and evaluation of the SPOSH and yaPOSH high-level agent behavior languages. We show that incomplete development support may prevent a tool from giving any benefit to developers at all. We also present our experience from transferring knowledge gained during yaPOSH development to actual AI design for an upcoming AAA game.

Paper Nr: 136
Title:

Medical-treatment Recommendation and the Integration of Process Models into Knowledge-based Systems

Authors:

Laia Subirats, Luigi Ceccaroni, Jose María Maroto, Carmen de Pablo and Felip Miralles

Abstract: Decision making based on evidence other than human reasoning is becoming increasingly important in healthcare. Valuable evidence takes the form of treatment processes used by healthcare institutions, and this paper presents a new framework for representing and modeling knowledge from these processes. Specifically, it presents the integration of data from the literature, business processes and decision trees through workflows that cover the full cycle of health care, from diagnosis to prognosis and treatment. Because single instants cannot convey sufficient information about patient status, time series are analyzed and classified to improve decision-making ability. The elicitation of new knowledge takes into account international standards, ontologies, information models, nomenclatures and multiple types of indicators. The integration of formal process modeling in knowledge-based systems is exemplified by a real-world recommendation scenario. After evaluation with a medical-rehabilitation data set, results show a strong correspondence between the treatment recommended by the proposed system and clinical practice.

Paper Nr: 144
Title:

Fuzzy Cognitive Map Reconstruction - Methodologies and Experiments

Authors:

Wladyslaw Homenda, Agnieszka Jastrzebska and Witold Pedrycz

Abstract: The paper focuses on fuzzy cognitive maps - abstract soft computing models that can be applied to model complex systems with uncertainty. The authors present two distinct methodologies for fuzzy cognitive map reconstruction based on gradient learning. Both theoretical and practical issues involved in the process of map reconstruction are discussed. Among the aspects researched and described are map size, data dimensionality, distortions, and the optimization procedure. Theoretical results are supported by a series of experiments that allow the quality of the developed approach to be evaluated. The authors compare both procedures and discuss the practical issues entailed in the developed methodology. The goal of this study is to investigate theoretical and practical problems relevant to the fuzzy cognitive map reconstruction process.
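A fuzzy cognitive map evolves by repeatedly passing weighted concept activations through a transfer function. A minimal sketch of the standard synchronous update rule with a sigmoid transfer function (the slope value and matrix layout are illustrative assumptions, not the paper's settings):

```python
import math

def sigmoid(x, slope=5.0):
    """Common FCM transfer function squeezing activations into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-slope * x))

def fcm_step(activations, weights, slope=5.0):
    """One synchronous update of a fuzzy cognitive map.

    activations: current concept values A_j(t).
    weights: weights[j][i] is the influence of concept j on concept i.
    Returns A_i(t+1) = f(sum_j w_ji * A_j(t)) for every concept i.
    """
    n = len(activations)
    return [sigmoid(sum(weights[j][i] * activations[j] for j in range(n)), slope)
            for i in range(n)]
```

Gradient-based reconstruction, as studied in the paper, fits the weight matrix so that repeated applications of such a step reproduce observed concept trajectories.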

Paper Nr: 148
Title:

Motivational Strategies to Support Engagement of Learners in Serious Games

Authors:

Ramla Ghali, Maher Chaouachi, Lotfi Derbali and Claude Frasson

Abstract: The use of video games as learning tools is becoming increasingly widespread. Such games are known as educational games or serious games. They mainly aim at providing the learner with an environment that is interactive, motivational and educational at the same time. In order to study the characteristics necessary for the development of an effective serious game (both motivational and educational), we evaluated the physiological responses of participants during their interaction with our serious game, called HeapMotiv. We measured a physiological index of engagement through a wireless EEG headset and studied the evolution of this index across the different missions and motivational strategies of HeapMotiv. Focusing on the gaming aspects, the analysis of this engagement index showed the significant impact of motivational strategies on skill acquisition and motivational experience. An agent-based architecture is proposed as a methodological basis for the design of serious games.

Paper Nr: 168
Title:

Plateau in a Polar Variable Complex-valued Neuron

Authors:

Tohru Nitta

Abstract: In this paper, the characteristics of the complex-valued neuron model with parameters represented in polar coordinates (called the polar variable complex-valued neuron) are investigated. The main results are as follows. The polar variable complex-valued neuron is unidentifiable: there exists a parameter that does not affect the output value of the neuron, so its value cannot be identified. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron: the learning error does not decrease for a period. Furthermore, computer simulations suggest that a single polar variable complex-valued neuron has the following characteristics: (a) unidentifiable parameters (singular points) degrade the learning speed, and (b) a plateau can occur during learning; when the weight is attracted to a singular point, learning tends to get stuck.

Paper Nr: 169
Title:

Using Word Sense as a Latent Variable in LDA Can Improve Topic Modeling

Authors:

Yunqing Xia, Guoyu Tang, Huan Zhao, Erik Cambria and Thomas Fang Zheng

Abstract: Since it was proposed, LDA has been successfully used to model text documents. So far, words have been the common features used to induce latent topics, which are later used in document representation. Observation of documents indicates that polysemous words can make the latent topics less discriminative, resulting in less accurate document representation. We thus argue that semantically deterministic word senses can improve the quality of the latent topics. In this work, we propose a series of word-sense-aware LDA models that use word sense as an extra latent variable in topic induction. Preliminary document clustering experiments on benchmark datasets show that word sense can indeed improve topic modeling.

Paper Nr: 171
Title:

Computational Models of Classical Conditioning - A Qualitative Evaluation and Comparison

Authors:

Eduardo Alonso, Pavandeep Sahota and Esther Mondragón

Abstract: Classical conditioning is a fundamental paradigm in the study of learning and thus in understanding cognitive processes and behaviour, for which we need comprehensive and accurate models. This paper aims at evaluating and comparing a collection of influential computational models of classical conditioning by analysing the models themselves and against one another qualitatively. The results will clarify the state of the art in the area and help develop a standard model of classical conditioning.

Paper Nr: 176
Title:

Fundamental Artificial Intelligence - Machine Performance in Practical Turing Tests

Authors:

Huma Shah, Kevin Warwick, Ian M. Bland and Chris D. Chapman

Abstract: Fundamental artificial intelligence is founded on Turing’s imitation game. This can be implemented in two different ways: a simultaneous comparison 3-participant test, and a 2-participant viva voce test. In the former, the human interrogator questions two hidden interlocutors in parallel deciding which is the human and which is the machine. In the latter test, the judge interrogates one hidden entity and decides whether it is a human or a machine. The results from an original experiment conducted at Bletchley Park in June 2012 implementing both tests side-by-side showed the simultaneous comparison was a stronger test for artificial intelligence.

Paper Nr: 177
Title:

Neural Multi-agent-based Approach for Preventing Blackouts in Power Systems

Authors:

Michael Negnevitsky, Nikita Tomin, Daniil Panasetsky, Ulf Haeger, Nikolay Voropai, Christian Rehtanz and Victor Kurbatsky

Abstract: A neural multi-agent-based approach for system monitoring and the prevention of large-scale emergencies in power systems is presented in this paper. The automatic emergency control process is represented as a neural multi-agent system with a hierarchical architecture. The proposed system consists of two main parts: the alarm trigger, a Kohonen neural-network-based system for early detection of possible alarm states in a power system, and the competitive–collaborative multi-agent control system. For demonstration purposes, we investigated conventional and neural multi-agent automatic control schemes. Results are presented and discussed.

Paper Nr: 178
Title:

Designing Cloud Data Warehouses using Multiobjective Evolutionary Algorithms

Authors:

Tansel Dökeroğlu, S. Alper Sert, M. Serkan Çinar and A. Coşar

Abstract: Database as a Service (DBaaS) providers need to improve their existing data management capabilities and balance the efficient use of virtual resources among multiple users with varying needs. However, no existing method addresses both the optimization of the total ownership price and the performance of the queries of a Cloud data warehouse while taking into account alternative virtual resource allocations and query execution plans. Our proposed method tunes the virtual resources of a Cloud to a data warehouse system, whereas most previous studies tuned the database/queries to a given static resource setting. We solve this important problem with an exact branch and bound algorithm and a robust multiobjective genetic algorithm. Finally, through several experiments, we report remarkable findings for the proposed algorithms.

Paper Nr: 180
Title:

A Multi-demand Adaptive Bargaining based on Fuzzy Logic

Authors:

Jieyu Zhan, Xudong Luo, Wenjun Ma and Youzhi Zhang

Abstract: Nowadays, decisions in estate investment are made by groups of investors with different demands, and how to find an agreement among them becomes an essential issue. Thus, this paper introduces a fuzzy-logic-based bargaining model to solve such problems. Moreover, we perform extensive simulation experiments to reveal how bargainers’ risk attitudes, patience and regret degrees influence the outcome of a game, and benchmark our model against a previous one. From these experiments, we conclude that our model reflects human intuition well, has a higher success rate, and bargains more efficiently than the previous one.

Paper Nr: 182
Title:

Research Proposal in Probabilistic Planning Search

Authors:

Yazmin S. Villegas-Hernandez and Federico Guedea-Elizalde

Abstract: In planning search, there are different approaches to guiding the search, all of which aim to find a plan (a solution) in less time. Most of these approaches use heuristics that are not admissible, but they obtain good results in terms of time. For example, using the heuristic-search planning approach, plans can be generated in less time than with other approaches, but the plans generated by heuristic planners are sub-optimal, or the search may reach dead ends (states from which the goals become unreachable). We present an approach that guides the search probabilistically in order to avoid the problems of non-admissible approaches. We extend Bayesian network and Bayesian inference ideas to our work, and we present our way of making Bayesian inferences in order to guide the search better. The results of experiments with our approach on several well-known benchmarks are presented: Driverlog, Zenotravel, Satellite, Rovers, and Freecell.

Paper Nr: 183
Title:

AgentSlang: A New Distributed Interactive System - Current Approaches and Performance

Authors:

Ovidiu Șerban and Alexandre Pauchet

Abstract: This paper proposes a generic platform for developing fast and reliable Distributed Interactive Systems. The modelling is based on a component design approach, with an element structure simple and versatile enough to allow the integration of existing algorithms. The AgentSlang platform consists of a series of original components integrated with several existing algorithms to provide a development environment for Interactive Systems. Our approach is original in several respects. First, the platform is based on a data- and component-oriented design, which integrates the concepts of feedback management and dialogue management with a flexible component architecture into a unified system. Second, the Syn!bad language is integrated as a component of AgentSlang. Third, the message exchange speed is superior to that of existing platforms, even while providing extra features such as action execution feedback and data type consistency checking.

Paper Nr: 188
Title:

From Inter-agent to Intra-agent Representations - Mapping Social Scenarios to Agent-role Descriptions

Authors:

Giovanni Sileno, Alexander Boer and Tom van Engers

Abstract: The paper introduces elements of a methodology for the acquisition of descriptions of social scenarios (e.g. cases) and for their synthesis into agent-based models. It proceeds in three steps. First, the case is analyzed at the signal layer, i.e. the messages exchanged between actors. Second, the signal layer is enriched with the implicit actions, intentions, and conditions necessary for the story to occur; this elicitation is based on elements provided with the story, common sense, expert knowledge and direct interaction with the narrator. Third, the resulting scenario representation is synthesized into agent programs. These scripts correspond to descriptions of the agent roles observed in that social setting.

Paper Nr: 190
Title:

Reducing Sample Complexity in Reinforcement Learning by Transferring Transition and Reward Probabilities

Authors:

Kouta Oguni, Kazuyuki Narisawa and Ayumi Shinohara

Abstract: Most existing reinforcement learning algorithms require many trials before they obtain optimal policies. In this study, we apply transfer learning to reinforcement learning to achieve greater efficiency. We propose a new algorithm called TR-MAX, based on the R-MAX algorithm. TR-MAX transfers the transition and reward probabilities from a source task to a target task as prior knowledge. We theoretically analyze the sample complexity of TR-MAX. Moreover, we show that TR-MAX performs much better than R-MAX in practice on maze tasks.

Paper Nr: 194
Title:

One-Step or Two-Step Optimization and the Overfitting Phenomenon - A Case Study on Time Series Classification

Authors:

Muhammad Marwan Muhammad Fuad

Abstract: For the last few decades, optimization has been developing at a fast rate. Bio-inspired optimization algorithms are metaheuristics inspired by nature. These algorithms have been applied to solve problems in engineering, economics, and other domains, as well as in branches of information technology such as networking and software engineering. Time series data mining is a field of information technology that has its share of these applications too. In previous work we showed how bio-inspired algorithms such as genetic algorithms and differential evolution can be used to find the locations of the breakpoints used in the symbolic aggregate approximation representation of time series, and how particle swarm optimization, one of the well-known bio-inspired algorithms, can be used to assign weights to the different segments of that representation. In this paper we present, in two different approaches, a new meta-optimization process that produces optimal locations of the breakpoints as well as optimal weights of the segments. The time series classification experiments we conducted give an interesting example of how overfitting, a frequently encountered problem in data mining that occurs when the model overfits the training set, can interfere with the optimization process and hide the superior performance of an optimization algorithm.

Paper Nr: 200
Title:

Social Cognition in Silica - A ‘Theory of Mind’ for Socially Aware Artificial Minds

Authors:

Michael Harré

Abstract: Each of us has an incredibly large repertoire of behaviours from which to select at any given time, and as our behavioural complexity grows, so too does the possibility that we will misunderstand each other’s actions. However, we have evolved a cognitive mechanism, called our ‘Theory of Mind’, that allows us to understand another person’s psychological space: their motivations, constraints, plans, goals and emotional state. This capability allows us to understand the choices another person might make, on the basis that the other person has their own ‘internal world’ that influences their choices in the same way that our own internal world influences ours. Arguably, this is one of the most significant cognitive developments in human evolutionary history, along with our ability for long-term adaptation to familiar situations and our ability to reason dynamically in completely novel situations. So the question arises: can we implement the rudimentary foundations of a human-like Theory of Mind in an artificial mind, such that it can dynamically adapt to the likely decisions of another mind (artificial or biological) by holding an internal representation of that other mind? This article argues that this is possible and that we already have much of the necessary theoretical foundation to begin the development process.

Paper Nr: 202
Title:

Trust-based Personal Information Management in SOA

Authors:

Guillaume Feuillade, Andreas Herzig and Kramdi Seifeddine

Abstract: Service Oriented Architecture (SOA) enables cooperation in an open and highly concurrent context. In this paper, we investigate the management of personal information by an SOA service consumer while invoking composed services, studying the balance between quality of service (which works better when provided with our personal data) and the consumer’s data access policy. We present a service architecture based on an open epistemic multi-agent system. We describe a logic-based trust module that a service consumer can use to assess and explain his trust in composed services (which are perceived as composed actions executed by a group of agents in the system). We then illustrate our solution in a case study involving a professional social network.

Paper Nr: 215
Title:

Development of a Safest Routing Algorithm for Evacuation Simulation in Case of Fire

Authors:

Denis Shikhalev, Renat Khabibulin and Armel Ulrich Kemloh Wagoum

Abstract: The route choice of pedestrians during an emergency evacuation can be influenced by many factors. In this contribution, we elaborate three criteria to consider during an evacuation with a fire hazard. The criteria are combined in an objective function that is minimized during the simulation; this function defines the safeness of a route. In addition, an algorithm is presented which evaluates and redirects the pedestrians to the safest path during the simulation. The algorithm shows a positive impact on the evacuation time and on overall safety during an evacuation simulation. A long-term goal is the integration of the presented algorithm in an evacuation system that gives instructions or recommendations during the evacuation process using dynamic indicators.

Paper Nr: 217
Title:

Restoration of Archaeological Artifacts by a Genetic Algorithm with Image Features

Authors:

Koji Kashihara

Abstract: Archaeological artifacts have been discovered all over the world, and restoring artifacts broken into pieces involves positioning problems. Therefore, an intelligent computer-assisted system is proposed to rebuild archaeological discoveries from fragments. A real-coded genetic algorithm (GA) and a hill-climbing algorithm were evaluated for reconstructing a 3D object. The fitness function value for the GA was computed from image features of the object, and the ORB (Oriented FAST and Rotated BRIEF) technique was used to solve the positional problem. The proposed method, based on the GA with image features, was able to regulate the 3D surfaces efficiently. In future research, the proposed 3D rebuilding method could be applied to various practical applications.

Paper Nr: 218
Title:

Robot Cognition using Bayesian Symmetry Networks

Authors:

Anshul Joshi, Thomas C. Henderson and Wenyi Wang

Abstract: (Leyton, 2001) proposes a generative theory of shape, and general cognition, based on group actions on sets as defined by wreath products. This representation relates object symmetries to motor actions which produce those symmetries. Our position expressed here is that this approach provides a strong basis for robot cognition when: 1. sensory data and motor data are tightly coupled during analysis, 2. specific instances and general concepts are structured this way, and 3. uncertainty is characterized using a Bayesian framework. Our major contributions are (1) algorithms for symmetry detection and to realize wreath product analysis, and (2) a Bayesian characterization of the uncertainty in wreath product concept formation.

Paper Nr: 220
Title:

Smart Areas - A Modular Approach to Simulation of Daily Life in an Open World Video Game

Authors:

Martin Cerny, Tomas Plch, Matej Marko, Petr Ondracek and Cyril Brom

Abstract: Constructing believable behavior of non-player characters (NPCs) for large open worlds in computer games is a challenging application of AI. One of the greatest obstacles for practical game applications lies in managing the complexity of individual behaviors and their development cycle. We propose the use of “Smart Areas” to overcome these obstacles and allow for realistic simulation of NPCs’ day-to-day life, and we describe a particular implementation for an upcoming AAA game. For practical applications it is also vital to resolve usability issues and assess the productivity of the technology. We have conducted a qualitative study with 8 subjects comparing the performance of working with Smart Areas to using default AI tools. The study indicates that Smart Areas are not difficult to understand, allow for substantial code reuse resulting in faster modification of existing behaviors, and enforce good structuring of behavior code.

Paper Nr: 223
Title:

Automatic Generation of Questionnaires for Managing Configurable BP Models

Authors:

A. Jiménez-Ramírez, B. Weber, I. Barba and C. Del Valle

Abstract: Managing large collections of business process (BP) models is increasingly necessary for organizations. Configurable BP models can be used to manage these BPs while allowing analysts to understand what the BPs share and how they differ. Before a configurable BP model is executed, a BP model has to be selected from it. This selection is typically performed by an analyst who manually individualizes the model to address the business requirements. Unlike existing approaches, we propose a fully automated method to create a questionnaire-based application that guides a business expert in individualizing a model.

Paper Nr: 230
Title:

Combining Simulated Annealing and Monte Carlo Tree Search for Expression Simplification

Authors:

Ben Ruijl, Jos Vermaseren, Aske Plaat and Jaap van den Herik

Abstract: In many applications of computer algebra large expressions must be simplified to make repeated numerical evaluations tractable. Previous work presented heuristically guided improvements, e.g., for Horner schemes. The remaining expression is then further reduced by common subexpression elimination. A recent approach successfully applied a relatively new algorithm, Monte Carlo Tree Search (MCTS) with UCT as the selection criterion, to find better variable orderings. Yet, this approach leaves room for further improvement, since it is sensitive to the so-called “exploration-exploitation” constant Cp and the number of tree updates N. In this paper we propose a new selection criterion called Simulated Annealing UCT (SA-UCT) that has a dynamic exploration-exploitation parameter, which decreases with the iteration number i and thus reduces the importance of exploration over time. First, we provide an intuitive explanation in terms of the exploration-exploitation behavior of the algorithm. Then, we test our algorithm on three large expressions of different origins. We observe that SA-UCT widens the interval of good initial values Cp for which the best results are achieved. The improvement is large (more than tenfold) and facilitates the selection of an appropriate Cp.
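
The SA-UCT idea described above can be sketched as a small bandit experiment. The linear annealing schedule and the two-armed toy problem below are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import math
import random

def sa_uct_score(child_mean, child_visits, parent_visits, cp):
    # Standard UCT: exploitation term plus an exploration bonus scaled by cp.
    return child_mean + cp * math.sqrt(2.0 * math.log(parent_visits) / child_visits)

def annealed_cp(cp0, i, n_iterations):
    # SA-UCT: the exploration-exploitation parameter decreases with the
    # iteration number i, so exploration matters less over time.
    # A linear schedule is assumed here purely for illustration.
    return cp0 * (n_iterations - i) / n_iterations

# Toy two-armed bandit: the better arm (mean reward 0.7) should dominate
# once exploration has been annealed away.
random.seed(0)
means = [0.4, 0.7]
visits = [1, 1]
totals = [0.4, 0.7]
N = 1000
for i in range(N):
    cp = annealed_cp(1.0, i, N)
    parent = sum(visits)
    scores = [sa_uct_score(totals[a] / visits[a], visits[a], parent, cp)
              for a in range(2)]
    arm = scores.index(max(scores))
    reward = random.gauss(means[arm], 0.1)
    visits[arm] += 1
    totals[arm] += reward
```

With a fixed Cp, the bonus term keeps pulling samples toward the worse arm; annealing Cp toward zero makes the selection increasingly greedy, which is the behavior the abstract describes.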

Paper Nr: 235
Title:

Tear Film Maps based on the Lipid Interference Patterns

Authors:

Beatriz Remeseiro, Antonio Mosquera, Manuel G. Penedo and Carlos García-Resúa

Abstract: Dry eye syndrome is characterized by symptoms of discomfort, ocular surface damage, reduced tear film stability, and tear hyperosmolarity. These features can be identified by several types of diagnostic tests, although there may not be a direct correlation between the severity of symptoms and the degree of damage. One of the most widely used clinical tests is the analysis of the lipid interference patterns, which can be observed on the tear film, and their classification into the Guillon categories. Our previous research has demonstrated that the interference patterns can be characterized as color texture patterns. Thus, the manual test done by experts can be performed through an automatic process, which saves time for experts and provides unbiased results. Nevertheless, the heterogeneity of the tear film makes it impossible to classify a patient’s image into a single category. For this reason, this paper presents a methodology to create tear film maps based on the lipid interference patterns. In this way, the output image represents the distribution and prevalence of the Guillon categories on the tear film. The adequacy of the proposed methodology was demonstrated: it achieves reliable results in comparison with the annotations made by experts.

Paper Nr: 237
Title:

Energy-efficient Multicast Routing by using Genetic Local Search

Authors:

Valery Katerinchuk, Andreas Albrecht and Kathleen Steinhöfel

Abstract: Energy-efficient multicast routing algorithms have predominantly focused on wireless or ad-hoc mobile networks. However, since the turn of the century the need for energy-efficient approaches to routing in wired networks has been steadily rising. In this paper, we introduce an objective function for multicast routing in wired networks that takes energy consumption into consideration. A number of hybrid Genetic and Simulated Annealing based algorithms have been shown to find better solutions to the multicast routing problem than purely Genetic or Simulated Annealing based algorithms. Our approach adapts a population-based hybrid algorithm for routing multiple simultaneous multicast requests. We examine its performance in terms of energy efficiency against solutions found by Logarithmic Simulated Annealing and Genetic based algorithms. We find that the hybrid approach found superior solutions in 87% of instances, and solutions superior or equal to the best solution given by either the Simulated Annealing or the Genetic approach in 96% of instances. The extent of the improvement, however, varied greatly, from a few hundred Joules down to less than ten, with the improvement on the best solution ranging from 5.6 to 531.5 Joules.

Posters
Paper Nr: 19
Title:

Gender Classification based on Fingerprints using SVM

Authors:

Romany F. Mansour, Abdulsamad Al-Marghilnai and Meshrif Alruily

Abstract: The fingerprint is a commonly used biometric method for person identification, and the most conventional and widely used technique in forensics and criminal investigations. Identifying a person's age and gender from his or her fingerprint is an important step in overall identification. The aim of this paper is to propose a gender classification technique based on the fingerprint characteristics of individuals using the discrete cosine transform (DCT). Gender classification is evaluated using dimensionality reduction techniques such as Principal Component Analysis (PCA), along with a Support Vector Machine (SVM). A dataset of 2600 persons of different ages and sexes was collected as an internal database. Of the samples tested, 1250 of 1375 male samples and 1085 of 1225 female samples were correctly identified.
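
The DCT feature-extraction stage of such a pipeline can be sketched as below. The PCA and SVM stages are omitted, and the 8x8 toy pattern, the coefficient-block size, and the function names are illustrative assumptions, not the paper's actual parameters.

```python
import math

def dct2(block):
    # Orthonormal 2D DCT-II of a square block, computed directly from
    # the separable definition.
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for i in range(n):
                for j in range(n):
                    s += (block[i][j]
                          * math.cos(math.pi * (2 * i + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * j + 1) * v / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def dct_features(img, k=4):
    # Keep the top-left (low-frequency) k x k coefficients as the feature
    # vector, a common compact descriptor for texture-like images.
    coeffs = dct2(img)
    return [coeffs[u][v] for u in range(k) for v in range(k)]

# Toy 8x8 "ridge" pattern: alternating light and dark columns.
img = [[(255 if j % 2 == 0 else 0) for j in range(8)] for i in range(8)]
feats = dct_features(img)
```

The low-frequency coefficients summarize the overall ridge structure; a real system would feed such vectors into PCA and then an SVM, as the abstract describes.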

Paper Nr: 23
Title:

Adaptation Schemes for Question's Level to be Proposed by Intelligent Tutoring Systems

Authors:

Rina Azoulay, Esther David, Dorit Hutzler and Mireille Avigal

Abstract: The main challenge in developing a good Intelligent Tutoring System (ITS) is to suit the difficulty level of questions and tasks to the current student's capabilities. According to the state of the art, most ITS systems use the Q-learning algorithm for this adaptation task. Our paper presents innovative results that compare the performance of several methods, most of which have not previously been applied to ITS, to handle this challenge. In particular, to the best of our knowledge, this is the first attempt to apply Bayesian inference to question-level matching in ITS. To identify the best adaptation scheme in this groundwork research, we used an artificial environment with simulated students for the evaluation phase. The results were benchmarked against the optimal performance of the system, assuming the user model (abilities) is completely known to the ITS. The results show that the best-performing method in most of the environments considered is based on Bayesian inference, which achieved 90% or more of the optimal performance. Our conclusion is that it may be worthwhile to integrate Bayesian-inference-based algorithms to adapt questions to a student's level in ITS. Future work is required to extend these empirical results to environments with real students.

Paper Nr: 33
Title:

Monitoring of Grinding Burn by AE and Vibration Signals

Authors:

Rodolpho F. Godoy Neto, Marcelo Marchi, Cesar Martins, Paulo R. Aguiar and Eduardo Bianchi

Abstract: The grinding process is widely used in the surface finishing of steel parts and corresponds to one of the last steps in the manufacturing process. Thus, it is essential to have reliable monitoring of this process. In the grinding of metals, the phenomenon of burn is one of the worst faults to be avoided. Therefore, a monitoring system able to identify this phenomenon would be of great importance for the process. The aim of this work is the monitoring of burn during the grinding process through an intelligent system that uses acoustic emission (AE) and vibration signals as inputs. Tests were performed on a surface grinding machine using an SAE 1020 steel workpiece and an aluminum oxide grinding wheel. The acquisition of the vibration and AE signals was done by means of an oscilloscope with a sampling rate of 2 MHz. By analyzing the frequency spectra of these signals it was possible to determine the frequency bands that best characterized the phenomenon of burn. These bands were used as inputs to an artificial neural network capable of classifying the surface condition of the part. The results of this study allowed characterizing the surface of the workpiece into three groups: no burn, burn, and high surface roughness. The selected neural model produced good results for classifying the three patterns studied.

Paper Nr: 34
Title:

Coverage and Goal Searching Behaviours of a Group of Agents by a Special Single Query Roadmap - Its Benefits to Multiple Query Roadmaps

Authors:

Ali Nasri Nazif and Mohammad Torabi Rad

Abstract: This paper mainly targets two different types of swarming behaviour in a 2D environment, namely area coverage and goal searching in an environment occupied with obstacles. For such behaviours, we introduce a roadmap (a tree) customized to behave well in multi-agent scenarios. We consider a variety of situations and environments, and explain how the proposed method operates under such circumstances. A comparison is also made with respect to multiple query roadmaps.

Paper Nr: 37
Title:

The Non-Force Interaction Theory for Reflex System Creation with Application to TV Voice Control

Authors:

Iurii Teslia, Nataliia Popovych, Valerii Pylypenko and Oleksandr Chornyi

Abstract: The paper presents the main aspects and conclusions of the theory of non-force interaction and discusses its application to the creation of artificial intelligence systems. A method for calculating the reaction to non-force actions in the sphere of intellectual activity and a universal model of an intellectual reflex system are proposed. On this basis, a reflex voice system for controlling technical devices is developed. The article describes the system and the results of using it to control a TV, in particular: the special features of controlling the TV’s functionality with voice commands; ignoring commands that are not addressed to the system; learning new commands and desired reactions to user requests; and adjusting the system’s behaviour based on the user’s speech. The work aims to demonstrate the possibilities of the theory of non-force interaction in the study of brain mechanisms and in creating, on this basis, artificial systems whose “intelligence” approaches human intelligence.

Paper Nr: 44
Title:

Automatic Generation of Large Knowledge Bases using Deep Semantic and Linguistically Founded Methods

Authors:

Sven Hartrumpf, Hermann Helbig and Ingo Phoenix

Abstract: Large-scale knowledge acquisition from texts is one of the challenges of the information society that can only be mastered by technical means. While the syntactic analysis of isolated sentences is relatively well understood, the problem of automatically parsing on all linguistic levels, starting from the morphological level through to the semantic level, i.e. real understanding of texts, is far from being solved. This paper explains the approach taken in this direction by the MultiNet technology in bridging the gap between the syntactic semantic analysis of single sentences and the creation of knowledge bases representing the content of whole texts. In particular, it is shown how linguistic text phenomena like inclusion or bridging references can be dealt with by logical means using the axiomatic apparatus of the MultiNet formalism. The NLP techniques described are practically applied in transforming large textual corpora like Wikipedia into a knowledge base and using the latter in meaning-oriented search engines.

Paper Nr: 50
Title:

Effective Distribution of Large Scale Situated Agent-based Simulations

Authors:

Omar Rihawi, Yann Secq and Philippe Mathieu

Abstract: Agent-based simulations have increasing needs in computational and memory resources as the number of agents and interactions grows. In this paper, we are concerned with the simulation of large scale situated multi-agent systems (MAS). To be able to simulate several thousand or even a million agents, it becomes necessary to distribute the load on a computer network. This distribution can be done in several ways, and this paper presents two specific distributions: the first one is based on the environment and the second one is based on agents. We illustrate the pros and cons of both distribution types with two classical MAS applications: prey-predator and flocking behaviour models.

Paper Nr: 57
Title:

Targeted Linked-Data Extractor

Authors:

Pierre Maillot, Thomas Raimbault, David Genest and Stephane Loiseau

Abstract: The Linked Data Cloud is too big to be manipulated locally by standard computers, and not all use cases need to manipulate the whole cloud. To get exactly what is needed for a specific use case, we need to obtain the relevant parts from each base of the Linked Data Cloud. This paper proposes a method to smartly extract a sub-part of the Linked Data Cloud driven by a list of resources called seeds. The method consists of extracting data starting from the seed resources and recursively expanding the extraction to their neighbours.

Paper Nr: 63
Title:

Learning on Vertically Partitioned Data based on Chi-square Feature Selection and Naive Bayes Classification

Authors:

Verónica Bolón-Canedo, Diego Peteiro-Barral, Amparo Alonso-Betanzos, Bertha Guijarro-Berdiñas and Noelia Sánchez-Maroño

Abstract: In the last few years, distributed learning has been the focus of much attention due to the explosion of big databases, in some cases distributed across different nodes. However, the great majority of current selection and classification algorithms are designed for centralized learning, i.e. they use the whole dataset at once. In this paper, a new approach for learning on vertically partitioned data is presented, which covers both feature selection and classification. The approach splits the data by features, and then uses the χ2 (chi-square) filter and the naive Bayes classifier to learn at each node. Finally, a merging procedure is performed, which updates the learned model in an incremental fashion. The experimental results on five representative datasets show that the execution time is shortened considerably whereas the classification performance is maintained as the number of nodes increases.
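
The per-node chi-square selection stage can be sketched as below; the naive Bayes classification and the incremental merge are omitted, and the toy data, function names, and binary features are illustrative assumptions.

```python
import math
from collections import Counter

def chi2_score(feature, labels):
    # Chi-square statistic between a binary feature and a binary class:
    # compares observed and expected counts of the 2x2 contingency table.
    n = len(labels)
    obs = Counter(zip(feature, labels))
    f_tot = Counter(feature)
    c_tot = Counter(labels)
    chi2 = 0.0
    for f in (0, 1):
        for c in (0, 1):
            expected = f_tot[f] * c_tot[c] / n
            if expected > 0:
                chi2 += (obs[(f, c)] - expected) ** 2 / expected
    return chi2

def select_at_node(columns, labels, k):
    # Each node ranks only its own vertical slice of features and keeps
    # the top-k; a merged model would then be built from these survivors.
    ranked = sorted(columns, key=lambda j: chi2_score(columns[j], labels),
                    reverse=True)
    return ranked[:k]

# Toy vertical partition at one node: feature "a" is perfectly
# informative about the class, "b" is noise.
labels = [0, 0, 0, 1, 1, 1]
node1 = {"a": [0, 0, 0, 1, 1, 1], "b": [0, 1, 0, 1, 0, 1]}
kept = select_at_node(node1, labels, 1)
```

Because each node filters its own feature slice independently, the selection step parallelizes naturally across the vertical partition, which is what shortens the execution time as nodes are added.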

Paper Nr: 68
Title:

Functional Semantics for Non-prenex QBF

Authors:

Igor Stéphan

Abstract: Quantified Boolean Formulae (or QBF) are suitable to represent finite two-player games. Current techniques to solve QBF are for prenex QBF, and knowledge representation is rarely in this form. We propose in this article a functional semantics for non-prenex QBF. The proposed formalism is symmetrical for validity and non-validity and makes it possible to give different interpretations to the quantifiers. With our formalism, the solution of a non-prenex QBF is consistent with the specification, directly readable by the designer of the QBF, and the locality of the knowledge is preserved.

Paper Nr: 77
Title:

Materializing Distributed Skyline Queries

Authors:

Samiha Brahimi and Mohamed-khireddine Kholladi

Abstract: In this paper, we tackle the problem of efficient skycube computation in structured P2P systems. We introduce a top-down algorithm called Distributed-Top-Sky, based on the recently introduced Top-Sky. Furthermore, we introduce two types of nodes: the Scheduling-Node, which organizes the network by assigning the computation of each cuboid to a Data-Node, and the Data-Node, which holds a part of the dataset used to compute the assigned cuboids. In order to evaluate the effectiveness of our approach, we have conducted extensive experiments on three real datasets over a simulated CAN (content addressable network) network.

Paper Nr: 83
Title:

Scalability Analysis of mRMR for Microarray Data

Authors:

Diego Rego-Fernández, Verónica Bolón-Canedo and Amparo Alonso-Betanzos

Abstract: Lately, driven by the Big Data problem, researchers in Machine Learning have become interested not only in accuracy but also in scalability. Although the scalability of learning methods is a trending issue, the scalability of feature selection methods has not received the same amount of attention. In this research, we study the scalability of both feature selection and machine learning on microarray datasets. To this end, the minimum redundancy maximum relevance (mRMR) filter method was chosen, since it is claimed to be very adequate for this type of dataset. Three synthetic databases which reflect the problematics of microarray data are evaluated with new measures, based not only on selection accuracy but also on execution time. The results obtained are presented and discussed.

Paper Nr: 85
Title:

Feature Selection Applied to Human Tear Film Classification

Authors:

Daniel G. Villaverde, Beatriz Remeseiro, Noelia Barreira, Manuel G. Penedo and Antonio Mosquera

Abstract: Dry eye is a common disease which affects a large portion of the population and hampers their routine activities. Its diagnosis and monitoring require a battery of tests, each designed for different aspects. One of these clinical tests measures the quality of the tear film and is based on its appearance, which can be observed using the Doane interferometer. The manual process done by experts consists of classifying the interferometry images into one of the five categories considered. The variability in these images makes an automatic system necessary to support dry eye diagnosis. In this research, a methodology to perform this classification automatically is presented. This methodology includes a color and texture analysis of the images, and also the use of feature selection methods to reduce image processing time. The effectiveness of the proposed methodology was demonstrated, since it provides unbiased results with classification errors lower than 9%. Additionally, it saves time for experts and can work in real time for clinical purposes.

Paper Nr: 89
Title:

A New Approach based on Cryptography and XML Serialization for Mobile Agent Security

Authors:

Hind Idrissi, Arnaud Revel and El Mamoun Souidi

Abstract: Mobile agents are a special category of software entities with the capacity to move between nodes of one or more networks. However, they suffer from security deficiencies, related particularly to the environments on which they land or to other malicious agents they may meet on their paths. The security of mobile agents is divided into two parts: the first relates to the vulnerabilities of the host environment receiving the agent, while the second concerns the malevolence of the agent towards the host platform and other agents. In this paper, we address the second part while trying to develop a hybrid solution covering both. A solution for this security concern is presented and evaluated. It involves the integration of cryptographic mechanisms such as the Diffie-Hellman key exchange for authentication between the pair (platform, agent) and the Advanced Encryption Standard (AES) to communicate data confidentially. These mechanisms are combined with XML serialization in order to ensure easy and persistent portability across the network, especially for non-permanent connections.

Paper Nr: 109
Title:

Surprising Recipe Extraction based on Rarity and Generality of Ingredients

Authors:

Kyosuke Ikejiri, Yuichi Sei, Hiroyuki Nakagawa, Yasuyuki Tahara and Akihiko Ohsuga

Abstract: Many surprising recipes that utilize different ingredients or cooking processes from normal recipes exist on user-generated recipe sites. The easiest way to find surprising recipes is to use the search function of the recipe sites. However, the titles of surprising recipes do not always include a keyword, such as “surprise”, or an indication that a recipe is unusual in any way. Therefore, we cannot find surprising recipes very easily. In this paper, we propose a method to extract surprising or unique recipes from those user-generated recipe sites. We propose RF-IIF (Recipe Frequency-Inverse Ingredient Frequency), based on TF-IDF (Term Frequency-Inverse Document Frequency). First, we calculate the surprising value of the ingredients by using RF-IIF. Then, we calculate the surprising value of each recipe by summing the surprising values of the ingredients that appear in the recipe. Finally, we extract recipes that have high surprising values as surprising recipes of the dish category. In the evaluation experiment, the subjects were asked to evaluate each surprising recipe. As a result, we showed that the extracted recipes were valid and also had a surprising or unusual element, demonstrating the usefulness of the proposed method.
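
The two-step scoring described above can be sketched as follows. Since the abstract does not give the exact RF-IIF weighting, the formula below is one plausible TF-IDF-style form, and the four-recipe corpus is a toy example.

```python
import math

def iif_scores(all_recipes):
    # Inverse Ingredient Frequency: ingredients that appear in few
    # recipes score high, by analogy with the IDF term of TF-IDF.
    # The paper's exact RF-IIF weighting may differ.
    n = len(all_recipes)
    doc_freq = {}
    for recipe in all_recipes:
        for ing in set(recipe):
            doc_freq[ing] = doc_freq.get(ing, 0) + 1
    return {ing: math.log(n / df) for ing, df in doc_freq.items()}

def recipe_surprise(recipe, scores):
    # A recipe's surprising value is the sum of its ingredients' scores.
    return sum(scores.get(ing, 0.0) for ing in recipe)

# Toy corpus: chocolate appears in only one of four recipes, so the
# recipe containing it should rank as the most surprising.
corpus = [["flour", "egg"], ["flour", "egg", "milk"],
          ["flour", "milk"], ["flour", "egg", "chocolate"]]
scores = iif_scores(corpus)
ranked = sorted(corpus, key=lambda r: recipe_surprise(r, scores), reverse=True)
```

Ingredients common to every recipe (here, flour) contribute nothing, so the ranking is driven by rare ingredients, which matches the intuition behind the method.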

Paper Nr: 115
Title:

(Semi-)Automatic Analysis of Dialogues

Authors:

Mare Koit

Abstract: We study human-human and human-computer dialogues with the aim of determining which dialogue acts and communicative strategies the participants use, and which structural parts a dialogue includes. We develop software that makes it possible to recognise and annotate the dialogue acts, the dialogue structure, and the communicative strategies. A data-driven method is implemented to recognise dialogue acts, while determination of the dialogue structure and the strategies is based on rules. The software tool is used by linguists in dialogue studies whose further aim is to develop a dialogue system that interacts with a user in natural language, following the norms and rules of human-human communication. The contribution of the paper consists of the integration of the existing approaches within a common platform and their adaptation to the Estonian language.

Paper Nr: 117
Title:

Enhance Text Recognition by Image Pre-Processing to Facilitate Library Services by Mobile Devices

Authors:

Chuen-Min Huang, Yi-Ling Chuang, Rih-Wei Chang and Ya-Yun Chen

Abstract: Facing the popularity of web searching, libraries have continuously invested in the provision of online searching and refurbished physical facilities to attract users during the past decades. In this study, we conducted a technical feasibility study of facilitating library services by applying a novel image pre-processing technique to enhance the performance of OCR on mobile devices. In the binarization stage, a grayscale image is usually binarized with a single global threshold value, but this is not suitable for some scenarios, such as non-uniform lightness and complicated backgrounds. Instead of segregating the grayscale image into many regions as in other studies, our approach partitions an image into only three equal-sized horizontal segments, identifies the local threshold value of each segment, and then restores the three segments to the original state. The experimental results illustrate that the proposed method efficiently and effectively improves text recognition. The accuracy rate was raised from 17.7% to 72.05% over all test images. Excluding eight unrecognizable images, the average accuracy rate of our treatment reaches 90.06%. To compare with other studies, we conducted another evaluation to examine the validity of our approach. The result showed that our treatment outperforms most of the other studies, achieving 74.6% precision and 80.2% recall. We are confident that this design will not only bring users more convenience in using libraries but also help library staff and businessmen to manage the status of books.
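
The three-segment binarization step can be sketched as below. The segment-mean threshold and the synthetic gradient image are illustrative stand-ins; a real system would likely compute a proper local threshold (e.g. Otsu's method) per segment.

```python
def binarize_in_segments(gray, n_segments=3):
    # Partition the grayscale image (a list of pixel rows) into
    # equal-sized horizontal segments, binarize each with its own local
    # threshold, then restack them into a full image.
    rows_per_seg = len(gray) // n_segments
    out = []
    for s in range(n_segments):
        seg = gray[s * rows_per_seg:(s + 1) * rows_per_seg]
        pixels = [p for row in seg for p in row]
        threshold = sum(pixels) / len(pixels)  # illustrative local threshold
        out.extend([[1 if p >= threshold else 0 for p in row] for row in seg])
    return out

# Toy page with a strong top-to-bottom lightness gradient and
# alternating "ink"/"paper" columns: a single global threshold would
# turn the darkest rows entirely black, while the per-segment
# thresholds still recover the alternating column pattern.
img = [[50 + r * 5 + (40 if c % 2 else -40) for c in range(30)]
       for r in range(30)]
out = binarize_in_segments(img)
```

Restricting the partition to three segments keeps the method cheap enough for mobile devices while still adapting to non-uniform lightness, which is the trade-off the abstract emphasizes.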

Paper Nr: 126
Title:

A Robot Waiter that Predicts Events by High-level Scene Interpretation

Authors:

Jos Lehmann, Bernd Neumann, Wilfried Bohlken and Lothar Hotz

Abstract: Being able to predict events and occurrences which may arise from a current situation is a desirable capability of an intelligent agent. In this paper, we show that a high-level scene interpretation system, implemented as part of a comprehensive robotic system in the RACE project, can also be used for prediction. This way, the robot can foresee possible developments of the environment and the effect they may have on its activities. As a guiding example, we consider a robot acting as a waiter in a restaurant and the task of predicting possible occurrences and courses of action, e.g. when serving coffee to a guest. Our approach requires that the robot possesses conceptual knowledge about occurrences in the restaurant and its own activities, represented in the standardized ontology language OWL and augmented by constraints using SWRL. Conceptual knowledge may be acquired by conceptualizing experiences collected in the robot’s memory. Predictions are generated by a model-construction process which seeks to explain evidence as parts of such conceptual knowledge, this way generating possible future developments. The experimental results show, among other things, the prediction of possible obstacle situations and their effect on the robot’s actions and estimated execution times.

Paper Nr: 128
Title:

Decomposition Techniques for Solving Frequency Assignment Problems (FAP) - A Top-Down Approach

Authors:

Lamia Sadeg-Belkacem, Zineb Habbas, Fatima Benbouzid-Si Tayeb and Daniel Singer

Abstract: This paper deals with solving the MI-FAP problem. Because of its NP-hardness, it is difficult to cope with real FAP instances with exact or even heuristic methods. This paper aims at solving MI-FAP using a decomposition approach and mainly proposes a generic Top-Down approach. The key idea behind the generic aspect of our approach is to link the decomposition and resolution steps. More precisely, two generic algorithms called the Top-Down and Iterative Top-Down algorithms are proposed. To validate this approach, two decomposition techniques and one efficient Adaptive Genetic Algorithm (AGA-MI-FAP) are proposed. The first results demonstrate a good trade-off between the quality of solutions and the execution time.

Paper Nr: 130
Title:

Ontology Integration with Contextual Information

Authors:

Dan Wu

Abstract: By studying ontologies in an ontology repository, context rules are developed to improve ontology integration results. A context rule contains conditions for identifying a context. These conditions are described by so-called context criteria, such as the author and domain of an ontology. When the conditions of a rule are met, the rule is fired and the contextual information in the body of the rule is inserted into the reasoner used for ontology integration. An example shows the construction of a context rule and its use in an ontology integration. The integration result is indeed improved compared with integration without contextual information.

Paper Nr: 145
Title:

An Aggressive Feature Selection Technique for Rule-based Text Categorization

Authors:

Salma Tayel, Stefan Agne, Andreas Dengel and Slim Abdennadher

Abstract: Feature selection is a prerequisite for all classification problems. For text categorization problems, features are mostly words, phrases, or sentences that exist in the text. The number of features affects the training time complexity. Some of the existing feature selection methods compromise training efficiency by including many, possibly irrelevant, features. The rest restrict the feature set according to some heuristic, which might result in poor accuracy. The aim of this work is to propose a selection technique that generates classifiers as accurate as those of the first group while using far fewer features. We propose a combination of selection techniques that filters out most of the irrelevant and neutral terms, producing a small, accurate feature set. The rule-based algorithm PART is used in evaluating this feature selection technique against two state-of-the-art selection approaches: uni-grams and pattern-based features. Five different datasets are used in the evaluation. The datasets have different sizes, domains, text lengths, and numbers of classes. The results show that the proposed technique combines the accuracy of the uni-grams approach with the feature set size of the pattern-based approach. This makes it very suitable for real-world applications of text categorization.

Paper Nr: 170
Title:

Supporting Human Recollection of the Impressive Events using the Number of Photos

Authors:

Masaki Matsumoto, Sho Matsuura, Kenta Mitsuhashi and Harumi Murakami

Abstract: We present a system to support human recollection with tag clouds, which are created from keywords generated by our algorithms from the usage information of Google Calendar and Twitter. The main feature of our research is weighting words by the number of photos taken by users, in order to recall impressive events. We evaluated the tag clouds by comparing our approach with a comparative approach, and our experimental results suggest the usefulness of our approach.

Paper Nr: 172
Title:

Quantum Probability in Operant Conditioning - Behavioral Uncertainty in Reinforcement Learning

Authors:

Eduardo Alonso and Esther Mondragon

Abstract: An implicit assumption in the study of operant conditioning and reinforcement learning is that behavior is stochastic, in that it depends on the probability that an outcome follows a response and on how the presence or absence of the outcome affects the frequency of the response. In this paper we argue that classical probability is not the right tool to represent uncertainty in operant conditioning, and we propose an interpretation of behavioral states in terms of quantum probability instead.

Paper Nr: 175
Title:

A Multi-agent Intelligent System for on Demand Transport Problem Solving

Authors:

Mohamad El Falou and Mhamed Itmi

Abstract: In recent years, urban traffic congestion and air pollution have become huge problems in many cities in the world. A possible investment to reduce congestion is to increase the number of passengers per vehicle and decrease the number of vehicles on the streets. This problem is known as the on-demand transport (ODT) problem. The ODT problem environment is defined by three components: the infrastructure of the city, the vehicles, and the drivers. Clients formulate requests for transportation from a pickup place to a drop-off place. These requests must be served in real time by the set of vehicles, which requires real-time environment updates. This paper is a first step towards modelling the ODT problem as a multi-agent distributed planning problem. Our model relaxes some definitions and reduces the complexity of the ODT problem to allow better optimization.

Paper Nr: 184
Title:

Data Mining Models to Predict Patient’s Readmission in Intensive Care Units

Authors:

Pedro Braga, Filipe Portela, Manuel Filipe Santos and Fernando Rua

Abstract: Decision making is one of the most critical activities in Intensive Care Units (ICU). Moreover, it is extremely difficult for health professionals to interpret all the available data in real time. In order to improve the decision process, classification models have been developed to predict patient readmission in the ICU. Knowing the probability of readmission in advance allows for a more efficient planning of discharge. Consequently, the use of these models results in lower rates of readmission and a reduction of the costs usually associated with premature discharges and unplanned readmissions. This work follows a numerical index called the Stability and Workload Index for Transfer (SWIFT). The data used to induce the classification models come from the ICU of Centro Hospitalar do Porto, Portugal. The results obtained so far, in terms of accuracy, were very satisfactory (98.91%) and were achieved using the Naïve Bayes technique. The models will allow health professionals to have a better perception of a patient's future condition at the moment of hospital discharge, making it possible to know the probability of the patient being readmitted into the ICU.

Paper Nr: 185
Title:

QDPSO and Minkowski Distance Applied to Transient Diagnosis System

Authors:

Andressa dos Santos Nicolau and Roberto Schirru

Abstract: When transients occur during the operation of Nuclear Power Plants (NPPs), their identification is critically important for both operational and safety reasons. Plant operators have to identify an event based upon the evaluation of several distinct process variables, which might hinder operators' actions and decisions. Transient identification systems have been proposed in order to support this analysis, with the aim of achieving successful or effective courses of action, as well as of reducing the time interval for a decision and for corrective actions. This paper presents a transient diagnosis system for PWR (pressurized water reactor) NPPs in which the optimization step of the classification algorithm is based upon the paradigm of quantum computing. In this case, the optimization metaheuristic Quantum Delta Potential Swarm Optimization (QDPSO) was implemented and tested. The system is able to identify anomalous events from the time series of process variables related to the normal condition and to three design-basis accidents. Unlike the diagnosis systems proposed in the literature, Minkowski distance was employed to calculate the similarity distance.

Paper Nr: 186
Title:

Neural Network for Fretting Wear Modeling

Authors:

Laura Haviez, Rosario Toscano, Siegfried Fourvy and Ghislain Yantio

Abstract: Material wear is a very complex, only partially formalized phenomenon involving numerous parameters and damage mechanisms. The need to characterize wear in many industrial applications prompted the present research. The study concerns an original strategy for investigating the effect of contact conditions on the wear behavior of carburized stainless steels under fretting and reciprocating sliding motion. A physical model was constructed, and pre-treated experimental data were fed into a neural network to model wear volume. Three models are proposed and compared according to their inputs.

Paper Nr: 193
Title:

Identification of Flaming and Its Applications in CGM - Case Studies toward Ultimate Prevention

Authors:

Yuki Iwasaki, Ryohei Orihara, Yuichi Sei, Hiroyuki Nakagawa, Yasuyuki Tahara and Akihiko Ohsuga

Abstract: Nowadays, anybody can easily express their opinion publicly through Consumer Generated Media (CGM). Because of this, a phenomenon of flooding criticism on the Internet, called flaming, frequently occurs. Although there are strong demands for flaming management, a service to reduce the damage caused by flaming after it occurs, it is very difficult to do so properly in practice. We instead try to keep the flaming from happening in the first place. Concretely, we propose methods to identify a tweet that is likely to trigger flaming on Twitter, considering public opinion among Twitter users. We divide flaming cases into three categories: criminal episodes (CEs), struggles between conflicting values (SBCVs) and secret exposures. The first two represent the vast majority of flaming cases. As for CEs, a Naïve Bayes-based method has proved promising for identifying such cases. As for SBCVs, we propose a dynamic P/N analysis based on daily polarity, which represents the strength of the polarity of public opinion on a given topic. An experiment using a past flaming case has shown that the method successfully explains the case as one caused by a gap between the polarity of the tweet and that of public opinion.

Paper Nr: 195
Title:

Towards a Language for Representing and Managing the Semantics of Big Data

Authors:

Ermelinda Oro, Massimo Ruffolo, Pietro Gentile and Giuseppe Bartone

Abstract: The amount of data in our world has been exploding. Integrating, managing and analyzing large amounts of data – i.e. Big Data – will become a key issue for businesses in order to operate and compete better in today's markets. Data are only useful if used in a smart way. We introduce the concept of Smart Data, that is, web and enterprise structured and unstructured big data with explicit and implicit semantics, which leverages context to understand intent, for better driving business processes and for better and more informed decision making. This paper proposes a language able to represent Big Data based on ontologies, and a system that implements an approach capable of satisfying the increasing need for efficiency and scalability in semantic data management. The proposed MANTRA Language allows for: (i) representing the semantics of data by knowledge representation constructs; (ii) acquiring data from disparate heterogeneous sources (e.g. databases, documents); (iii) integrating and managing data; (iv) reasoning and querying over Big Data. The syntax of the proposed language is partially derived from logic programming, but the semantics are completely revised. The novelty of the language we propose is that a class can be thought of as a flexible collection of structurally heterogeneous individuals that have different properties (schema-less). The language also allows efficient querying and reasoning for revealing implicit knowledge. These have been achieved by using a triple-based data persistency model and a scalable NoSQL storage system.

Paper Nr: 201
Title:

Belief Revision on Modal Accessibility Relations

Authors:

Aaron Hunter

Abstract: In order to model the changing beliefs of an agent, one must actually address two distinct issues. First, one must devise a model of static beliefs that accurately captures the appropriate notions of incompleteness and uncertainty. Second, one must define appropriate operations to model the way beliefs are modified in response to different events. Historically, the former is addressed through the use of modal logics and the latter is addressed through belief change operators. However, these two formal approaches are not particularly complementary; the normal representation of belief in a modal logic is not suitable for revision using standard belief change operators. In this paper, we introduce a new modal logic that uses the accessibility relation to encode epistemic entrenchment, and we demonstrate that this logic captures AGM revision. We consider the suitability of our new representation of belief, and we discuss potential advantages to be exploited in future work.

Paper Nr: 205
Title:

Domain-dependent and Observer-dependent Follow-up of Human Activity - A Normative Multi-agent Approach

Authors:

Benoît Vettier and Catherine Garbay

Abstract: We propose in this paper a novel approach to human activity follow-up that draws on a distinction between domain-dependent and observer-dependent viewpoints. While the domain-dependent (or intrinsic) viewpoint calls for the follow-up and interpretation of human activity per se, the observer-dependent (or extrinsic) viewpoint calls for a more subjective approach, which may involve an evaluative dimension regarding the human activity or the interpretation process itself. Of interest are the mutual dependencies that tie both processes over time: the observer viewpoint is known to shape domain-dependent interpretation, while domain-dependent interpretation is core to the evolution of the observer viewpoint. We make a case for using a normative multi-agent approach to design monitoring systems articulating both viewpoints. We illustrate the potential of the proposed approach with examples from daily-life scenarios.

Paper Nr: 212
Title:

Serious Game based on Virtual Reality and Artificial Intelligence

Authors:

Kahina Amokrane, Domitile Lourdeaux and Georges Michel

Abstract: Virtual reality is a very interesting technology for professional training. We can mention in particular the ability to simulate an activity without real danger, the flexibility in presenting information, and the exact control of simulation parameters, which allows specific situations to be reproduced. Today, technological maturity makes it possible to plan increasingly complex applications. However, on the one hand, this complexity increases the difficulty of providing, at the same time, pedagogical and narrative control (to ensure a given learning and narrative structure) and some freedom of action (to promote the emergence of varied, unique and surprising situations in order to ensure learning by doing and by errors). On the other hand, this complexity makes the tracking and understanding of the learner's path difficult. In this paper, we propose (1) a scripting model for training virtual environments combining both pedagogical control and the emergence of pertinent learning situations, and (2) tracking of the learner's actions, together with tools for the analysis and automatic diagnosis of the learner's performance.

Paper Nr: 228
Title:

Quantitative Study on a Multiscale Approach for OCT Retinal Layer Segmentation

Authors:

A. González, C. Ortigueira, M. Ortega and M. G. Penedo

Abstract: The OCT technique for retinal imaging is establishing itself as a relevant modality among ophthalmologists due to its capacity to show more information than classical modalities. Nowadays, many image processing-based applications are emerging to extract that information automatically. As a previous step to any automatic method for extracting features from these images, the retinal layers have to be segmented. Graph-based methods provide good results for this problem, although their efficiency is an important limitation. In this work, a multiscale or pyramidal approach is studied in order to overcome this limitation. Different configurations are proposed to determine the optimal method. Remarkably, this approach yields an improvement not only in computation time, but also in segmentation results.

Area 2 - Agents

Full Papers
Paper Nr: 26
Title:

Accurate Synchronization of Gesture and Speech for Conversational Agents using Motion Graphs

Authors:

Jianfeng Xu, Yuki Nagai, Shinya Takayama and Shigeyuki Sakazawa

Abstract: Multimodal representation of conversational agents requires accurate synchronization of gesture and speech. For this purpose, we investigate the important issues in synchronization as a practical guideline for our algorithm design through a precedent case study and propose a two-step synchronization approach. Our case study reveals that two issues (i.e. duration and timing) play an important role in manually synchronizing gesture with speech. Considering the synchronization problem as a motion synthesis problem instead of the behavior scheduling problem used in conventional methods, we use a motion graph technique with constraints on gesture structure for coarse synchronization in a first step and refine this further by shifting and scaling the motion in a second step. This approach can successfully synchronize gesture and speech with respect to both duration and timing. We have confirmed that our system makes the creation of attractive content easier than manual creation of equal quality. In addition, a subjective evaluation has demonstrated that the proposed approach achieves more accurate synchronization and higher motion quality than the state-of-the-art method.

Paper Nr: 54
Title:

Synthesis and Abstraction of Constraint Models for Hierarchical Resource Allocation Problems

Authors:

Alexander Schiendorfer, Jan-Philipp Steghöfer and Wolfgang Reif

Abstract: Many resource allocation problems are hard to solve even with state-of-the-art constraint optimisation software upon reaching a certain scale. Our approach to deal with this increasing complexity is to employ a hierarchical “regio-central” mechanism. It requires two techniques: (1) the synthesis of several models of agents providing a certain resource into a centrally and efficiently solvable optimisation problem and (2) the creation of an abstracted version of this centralised model that reduces its complexity when passing it on to higher layers. We present algorithms to create such synthesised and abstracted models in a fully automated way and demonstrate empirically that the obtained solutions are comparable to central solutions but scale better in an example taken from energy management.

Paper Nr: 58
Title:

The Role of Communication in Coordination Protocols for Cooperative Robot Teams

Authors:

Changyun Wei, Koen Hindriks and Catholijn M. Jonker

Abstract: We investigate the role of communication in the coordination of cooperative robot teams and its impact on performance in search and retrieval tasks. We first discuss a baseline without communication and analyse various kinds of coordination strategies for exploration and exploitation. We then discuss how the robots construct a shared mental model by communicating beliefs and/or goals with one another, as well as the coordination protocols with regard to subtask allocation and destination selection. Moreover, we also study the influence of various factors on performance including the size of robot teams, the size of the environment that needs to be explored and ordering constraints on the team goal. We use the Blocks World for Teams as an abstract testbed for simulating such tasks, where the team goal of the robots is to search and retrieve a number of target blocks in an initially unknown environment. In our experiments we have studied two main variations: a variant where all blocks to be retrieved have the same color (no ordering constraints on the team goal) and a variant where blocks of various colors need to be retrieved in a particular order (with ordering constraints). Our findings show that communication increases performance but significantly more so for the second variant and that exchanging more messages does not always yield a better team performance.

Paper Nr: 66
Title:

Agent-based Simulation of the German and French Wholesale Electricity Markets - Recent Extensions of the PowerACE Model with Exemplary Applications

Authors:

Andreas Bublitz, Philipp Ringler, Massimo Genoese and Wolf Fichtner

Abstract: Given electricity markets' complexity, model-based analysis has proven to be a valuable tool for decision makers in related industries and politics. Among the different modelling techniques for electricity markets, agent-based modelling offers specific advantages. In this paper, the detailed agent-based simulation model for the wholesale electricity market, PowerACE, is presented with its latest extensions. The model integrates the short-term perspective of daily electricity trading and long-term capacity expansion planning. Various market elements are simulated, including the day-ahead market as well as the coupling of different market areas with limited interconnection capacities. Strategic behaviour of the main supply-side agents is taken into account. The model has already been applied to various research questions regarding the development of electricity markets and the behaviour of market participants. In this contribution, exemplary results for the market coupling of the German and French wholesale electricity markets are shown. In the future, the PowerACE modelling framework is to be extended in various directions, including the simulation of an intraday market and the integration of different aspects of uncertainty, which becomes necessary given current developments in the electricity markets.

Paper Nr: 87
Title:

Decentralized Computation of Pareto Optimal Pure Nash Equilibria of Boolean Games with Privacy Concerns

Authors:

Sofie De Clercq, Kim Bauters, Steven Schockaert, Mihail Mihaylov, Martine De Cock and Ann Nowe

Abstract: In Boolean games, agents try to reach a goal formulated as a Boolean formula. These games are attractive because of their compact representations. However, few methods are available to compute the solutions and they are either limited or do not take privacy or communication concerns into account. In this paper we propose the use of an algorithm related to reinforcement learning to address this problem. Our method is decentralized in the sense that agents try to achieve their goals without knowledge of the other agents’ goals. We prove that this is a sound method to compute a Pareto optimal pure Nash equilibrium for an interesting class of Boolean games. Experimental results are used to investigate the performance of the algorithm.

Paper Nr: 106
Title:

Cooperatively Transporting Unknown Objects using Mobile Agents

Authors:

Ryo Takahashi, Munehiro Takimoto and Yasushi Kambayashi

Abstract: This paper presents an algorithm for cooperatively transporting objects with multiple robots that have no initial knowledge. The robots are connected by communication networks, and the controlling algorithm is based on the pheromone communication of social insects such as ants. Unlike traditional pheromone-based cooperative transportation, we have implemented the pheromone as mobile software agents that control the mobile robots corresponding to the ants. The pheromone agent holds a vector value pointing to its birth location, which is used to guide a robot to that location. Since the pheromone agent can diffuse by migrating between robots, just as a physical pheromone does, it can attract other robots scattered in the work field to its birth location. Once a robot finds an object, it briefly pushes the object, measuring the degree of inclination of the object. The robot then generates a pheromone agent whose vector value points to a pushing point suitable for suppressing the inclination of the object. This process of pushes and generations of pheromone agents enables the efficient transportation of the object. We have implemented a simulator based on our algorithm and conducted experiments to demonstrate the feasibility of our approach.

Paper Nr: 111
Title:

Design of Material-integrated Distributed Data Processing Platforms with Mobile Multi-agent Systems in Heterogeneous Networks

Authors:

Stefan Bosse

Abstract: An agent processing platform suitable for distributed computing in sensor networks consisting of low-resource (e.g., material-integrated) nodes is presented, providing a unique distributed programming model and enhanced robustness of the entire heterogeneous environment in the presence of node, sensor, link, data processing, and communication failures. In this work, multi-agent systems with mobile activity-based agents are used for sensor data processing in unreliable mesh-like networks of nodes, each consisting of a single microchip with limited computational resources. The agent behaviour, interaction, and mobility (between nodes) can be efficiently integrated on the microchip using a configurable pipelined multi-process architecture based on Petri nets. Additionally, software implementations and simulation models with equal functional behaviour can be derived from the same source model. Hardware and software platforms can be directly connected in heterogeneous networks. Agent interaction and communication are provided by a simple tuple-space database and by signals providing remote inter-node communication and interaction. A reconfiguration mechanism of the agent processing system allows activity graph changes at run time.

Paper Nr: 134
Title:

A Method for Semi-automatic Explicitation of Agent’s Behavior - Application to the Study of an Immersive Driving Simulator

Authors:

Kévin Darty, Julien Saunier and Nicolas Sabouret

Abstract: This paper presents a method for evaluating the credibility of agents' behaviors in immersive multi-agent simulations. It combines two approaches. The first is based on a qualitative analysis of questionnaires filled in by the users and annotations made by other participants to draw categories of users (related to their behavior in the context of the simulation or in real life). The second carries out a quantitative behavior data collection during simulations in order to automatically extract behavior clusters. We then study the similarities between user categories, participants' annotations and behavior clusters. Afterward, relying on the user categories and annotations, we compare human behaviors to those of agents in order to evaluate the agents' credibility and make their behaviors explicit. We illustrate our method with an immersive driving simulator experiment.

Paper Nr: 135
Title:

Contour-Net - A Model for Tactile Contour-tracing and Shape-recognition

Authors:

André Frank Krause, Thierry Hoinville, Nalin Harischandra and Volker Dürr

Abstract: We propose Contour-Net as a bio-inspired model for the rhythmic movement control of a pair of insectoid feelers, able to successively sample the contour of arbitrarily shaped objects. Initial object contact initiates a smooth transition from a large-amplitude, low-frequency searching behaviour to a local, small-amplitude, high-frequency sampling behaviour. Both behavioural states are defined by the parameters of a Hopf oscillator. Subsequent contact signals trigger a 180° phase-forwarding of the oscillator, resulting in repeated sampling of the object. The local sampling behaviour effectively serves as a contour-tracing method with high robustness, even for complicated shapes. Collected contour data points can be fed directly into an artificial neural network to classify the shape of an object. Given a sufficiently large training dataset, tactile shape recognition can be achieved in a position-, orientation- and size-invariant manner. Only minimal pre-processing (normalisation) of the contour data points is required.

Short Papers
Paper Nr: 31
Title:

Importance of Considering User’s Social Skills in Human-agent Interactions - Is Performing Self-adaptors Appropriate for Virtual Agents?

Authors:

Tomoko Koda and Hiroshi Higashino

Abstract: Self-adaptors are bodily behaviours, often involving self-touch, that are regarded as taboo in public. However, self-adaptors also occur during casual conversations between friends. We developed a virtual agent that exhibits self-adaptors during conversation with users. Our evaluation of interactions with agents that exhibit self-adaptors and with agents that do not indicated a dichotomy in the impressions of the agents between users with high social skills and those with low social skills. People with high social skills feel more friendliness toward an agent that exhibits self-adaptors than people with low social skills do. The result suggests the need to tailor the non-verbal behaviour of virtual agents according to the user's social skills.

Paper Nr: 45
Title:

IAAN: Intelligent Animated Agent with Natural Behaviour for Online Tutoring Platforms

Authors:

Helen V. Diez, Sara García, Jairo R. Sánchez, Maria del Puy Carretero and David Oyarzun

Abstract: The goal of the work presented in this paper is to develop an Intelligent Animated Agent with Natural Behaviour (IAAN). This agent is integrated into e-learning platforms in order to perform the role of an online tutor. The system stores in a database personalized information about each student regarding their level of education, their learning progress and their interaction with the platform. This information is then used by the 3D-modeled virtual agent to give personalized feedback to each student; the purpose of the agent is to guide the students throughout the lectures, taking into account their personal needs and interacting with them by means of verbal and non-verbal communication. To achieve this, a thorough study of natural behaviour has been carried out, and a complex state machine is being developed in order to provide IAAN with sufficient artificial intelligence to enhance the students' motivation and engagement with the learning process.

Paper Nr: 91
Title:

Introducing Mobility into Agent Coordination Patterns

Authors:

Sergio Esparcia and Ichiro Satoh

Abstract: This paper proposes coordination patterns that support matchmaking, communication and interaction among mobile agents in addition to stationary ones. Mobile agent technology is a powerful implementation technique for distributed systems, but it requires managing the migrations of agents, including their current and destination locations. The proposed patterns enable us to define coordination between mobile agents, or between mobile and stationary agents, without explicitly knowing their migrations between locations. They are mostly based on the Knowledge Query and Manipulation Language, one of the most widespread Agent Communication Languages, but new patterns are also proposed. Additionally, a case study on tourism is presented.

Paper Nr: 101
Title:

Towards Simulating Heterogeneous Drivers with Cognitive Agents

Authors:

Arman Noroozian, Koen V. Hindriks and Catholijn M. Jonker

Abstract: Every driver behaves differently in traffic. However, when it comes to micro-simulation of drivers with a high level of detail no framework manages to model the complexities of various driving styles as well as scale up to larger simulations. We propose a framework of micro-simulation combined with cognitive agents to facilitate such simulation tasks. Our goal is to (i) model individual drivers, and (ii) use this framework for the purpose of simulating realistic highway traffic with heterogeneous driving styles. The challenge is therefore to create a framework that facilitates such complex modeling and supports large scale simulations. We evaluate the framework from two perspectives. First, the ability to represent, model and simulate dissimilar drivers in addition to study and compare emerging behavior. Second, the scalability of the framework. We report on our experiences with the framework, outline several challenges and identify future areas for development.

Paper Nr: 129
Title:

To Calibrate & Validate an Agent-based Simulation Model - An Application of the Combination Framework of BI Solution & Multi-agent Platform

Authors:

Thai Minh Truong, Frédéric Amblard, Benoit Gaudou and Christophe Sibertin Blanc

Abstract: Integrated environmental modeling approaches, especially agent-based modeling, are increasingly used in large-scale decision support systems. A major consequence of this trend is the manipulation and generation of huge amounts of data in simulations, which must be efficiently managed. Furthermore, calibration and validation are also challenges for Agent-Based Modelling and Simulation (ABMS) approaches when the model has to work with integrated systems involving high volumes of input/output data. In this paper, we propose a calibration and validation approach for an agent-based model using a Combination Framework of Business intelligence solution and Multi-agent platform (CFBM). The CFBM is a logical framework dedicated to the management of the input and output data of simulations, as well as the corresponding empirical datasets, in an integrated way. The calibration and validation of a Brown Plant Hopper prediction model are presented and used throughout the paper as a case study to illustrate the way CFBM manages the data used and generated during the life cycle of simulation and validation.

Paper Nr: 158
Title:

Evacuation Simulation through Formal Emotional Agent based Modelling

Authors:

Ilias Sakellariou, Petros Kefalas and Ioanna Stamatopoulou

Abstract: Evacuation simulation is recognised as an important tool for assessing design choices for urban areas. Although a number of approaches have been introduced, it is widely accepted that such simulation scenarios demand modelling of the emotional aspects of evacuees and of how these affect their behaviour. The present work proposes that formal agent modelling based on eX-machines can not only rigorously define such scenarios but also naturally lead to realistic simulations of them. eX-machines can model agent behaviour influenced by emotions, including social aspects of emotions such as emotion contagion. The developed formal model is refined into simulation code that is able to visualise and simulate believable crowd behaviour.

Paper Nr: 179
Title:

A Pattern based Modelling for Self-organizing Multi-agent Systems with Event-B

Authors:

Zeineb Graja, Frederic Migeon, Christine Maurel, Marie-Pierre Gleizes, Linas Laibinis, Amira Regayeg and Ahmed Hadj Kacem

Abstract: Self-Organizing Multi-Agent Systems (SO-MAS) are defined as sets of autonomous entities called agents that interact together in order to achieve a given task. Generally, the development process of these systems is based on a bottom-up approach, which focuses on the design of the entities' individual behavior. The main question arising when developing SO-MAS is how to ensure that the designed entities, when interacting together, will give rise to the desired behavior. Our proposition for dealing with this question is to use formal methods. We propose a correct-by-construction method for the systematic design of SO-MAS based on the use of design patterns and formal stepwise refinements. Our work gives guidelines to assist the designer in developing the individual behavior of the entities and in proving its correctness at the early stages of the design process. The method is illustrated with the foraging ants case study.

Paper Nr: 191
Title:

Supporting Distant Human Collaboration under Tangible Environments - A Normative Multiagent Approach

Authors:

Fabien Badeig and Catherine Garbay

Abstract: The purpose of this paper is to present a new approach to support distant human collaboration under tangible environments. Our aim is not to build and transmit across the distant tables an accurate and complete description of the human activity. Rather, our choice is to restrict communication to the possibilities offered by the tangible tables (tangible object moves and virtual feedback). In this context, we propose to focus on the elicitation and sharing of the norms and conventions that frame human activity, a core issue to sustain proper collaboration. We promote in this perspective the design of a normative multiagent system, whose goal is to emulate the influence of these norms on distant cooperation, thus bringing mutual awareness to the human partners. The role of such system is (i) to represent these potentially heterogeneous and evolving systems of norms in a declarative and distributed way, (ii) to filter the interpretation and communication of human activity according to these norms, and (iii) to build an informed virtual feedback providing information about the conformity of action with respect to the conventions. An application to the RISK game is presented to exemplify the proposed approach.

Paper Nr: 203
Title:

Avatar-based Macroeconomics - Experimental Insights into Artificial Agents Behavior

Authors:

Gianfranco Giulioni, Edgardo Bucciarelli, Marcello Silvestri and Paola D'Orazio

Abstract: In this paper we present a new methodological approach based on the interplay between Experimental Economics and Agent-based Economics. Advances in the design and implementation of individual autonomous economic agents are presented. The methodology is organized in three steps. The first step focuses on agents. We use an inductive rather than a deductive approach: by means of the experimental method we observe agents’ behaviors. The second step is the behavioral rules’ building process that allows us to study how to estimate and structure artificial agents. In the third step, the set of previously induced behavioral rules are used to build artificial agents, i.e. “molded” avatars, which operate in the “archetype” macroeconomic system. The resulting Multi-agent system serves as the macroeconomic environment for our simulations and economic policy analysis.

Paper Nr: 213
Title:

Identifying Emotion in Organizational Settings - Towards Dealing with Morality

Authors:

Terán Oswaldo, Christophe Sibertin-Blanc and Benoit Gaudou

Abstract: Emotions play an essential role in the behaviour of human beings, whether through their sudden occurrence or through the continuous care taken to prevent the occurrence of unpleasant ones and to seek the occurrence of pleasant ones. Notably, in any system of collective action, they influence the behaviours of the actors with respect to each other. SocLab is a framework devoted to the study of the functioning of social organizations, through the agent-based modelling of their structure and the simulation of the processes by which the actors adjust their behaviours to one another and so regulate the organization. This position paper shows how SocLab makes it possible to characterize the configurations of an organization that are likely to arouse different kinds of social emotions in the actors, in order to cope with the emotional dimension of their behaviours. The case of a concrete organization is introduced to illustrate this approach and its usefulness for a deeper understanding of the functioning of organizations.

Paper Nr: 216
Title:

Improving Proceeding Test Case Prioritization with Learning Software Agents

Authors:

Sebastian Abele and Peter Göhner

Abstract: Test case prioritization is an important technique to improve the planning and management of a system test. The system test itself is an iterative process that accompanies a software system throughout its whole life cycle. Usually, a software system is altered and extended continuously. Test case prioritization algorithms find and order the most important test cases to increase test efficiency within the limited test time. Generally, knowledge about a system’s characteristics grows throughout development. With more experience and more empirical data, test case prioritization can be optimized to raise test efficiency. This article introduces a learning agent-based test case prioritization system, which improves the prioritization automatically by drawing conclusions from actual test results.

Paper Nr: 224
Title:

Hierarchical HMM-based Failure Isolation for Cognitive Robots

Authors:

Dogan Altan and Sanem Sariel-Talay

Abstract: Robots execute their planned actions in the physical world to accomplish their goals. However, since the real world is partially observable and dynamic, failures may occur during the execution of their actions. These failures should be detected immediately, and the underlying reasons for these failures should be isolated to ensure robustness. In this paper, we propose a probabilistic and temporal model-based failure isolation method that maintains Hierarchical Hidden Markov Models (HHMMs) in order to represent and reason about different failure types. The underlying reason for a failure can be isolated efficiently by multi-hypothesis tracking.
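The idea of multi-hypothesis tracking over HMMs can be illustrated with a simple flat-HMM sketch: score the observation sequence under one candidate model per failure type and keep the most likely hypothesis. The toy one-state models and failure names below are assumptions for illustration only; the paper itself uses hierarchical HMMs.

```python
import math

def forward_loglike(init, trans, emit, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the standard forward algorithm."""
    n = len(init)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

# Two hypothetical one-state failure models: a "stuck" failure emits
# symbol 0 persistently; a "slip" failure emits 0 and 1 equally often.
stuck = ([1.0], [[1.0]], [{0: 0.9, 1: 0.1}])
slip = ([1.0], [[1.0]], [{0: 0.5, 1: 0.5}])
obs = [0, 0, 0]
scores = {name: forward_loglike(*m, obs)
          for name, m in [("stuck", stuck), ("slip", slip)]}
best = max(scores, key=scores.get)  # the isolated failure hypothesis
```

Here a persistent run of symbol 0 favors the "stuck" hypothesis; a hierarchical model would additionally structure the states into sub-HMMs per failure type.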

Paper Nr: 229
Title:

Trading Experiments using Financial Agents in a Simulated Cloud Computing Commodity Market

Authors:

John Cartlidge

Abstract: In September 2012, Amazon, the leading Infrastructure as a Service (IaaS) provider, launched a secondary marketplace venue for users to buy and sell cloud resources between themselves—the Amazon EC2 Reserved Instance Marketplace (ARIM). ARIM is designed to encourage users to purchase more long-term reserved instances, thus generating more stable demand for the provider and additional revenue through commission on sales. In this paper, we model ARIM using a multi-agent simulation model populated with zero-intelligence plus (ZIP) financial trading agents. We demonstrate that ARIM offers a new opportunity for market makers (MMs) to profit from buying and selling resources, but suggest that this opportunity may be fleeting. We also demonstrate that altering the market mechanism from a retail market (where only sellers post offers; similar to ARIM) to a continuous double auction (where both buyers and sellers post offers) can result in higher sale prices and therefore higher commissions. Since IaaS is a multi-billion dollar industry and currently the fastest growing segment of the cloud computing market, we therefore suggest that Amazon may profit from altering the mechanism of ARIM to enable buyers to post bids.
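For readers unfamiliar with ZIP traders, the adaptive rule can be sketched as follows. This is a minimal illustrative seller assuming a Widrow-Hoff-style margin update in the spirit of Cliff's original ZIP design; the class name, parameter values, and simplified raise/undercut targets are assumptions, not the paper's implementation.

```python
# Minimal sketch of a zero-intelligence-plus (ZIP) seller.
# All parameter values here are illustrative assumptions.

class ZIPSeller:
    def __init__(self, limit_price, beta=0.3, margin=0.2):
        self.limit = limit_price      # cost: never sell below this
        self.beta = beta              # learning rate
        self.margin = margin          # profit margin (>= 0 for a seller)

    def quote(self):
        """Ask price: limit price raised by the current margin."""
        return self.limit * (1.0 + self.margin)

    def observe(self, last_price, trade_accepted):
        """Widrow-Hoff rule: nudge the margin so the quote moves toward
        a target derived from the last observed market price."""
        if trade_accepted and last_price >= self.quote():
            target = last_price * 1.05   # deals succeed at our level: raise margin
        else:
            target = last_price * 0.95   # undercut: lower margin
        delta = self.beta * (target - self.quote())
        # keep the margin non-negative so the quote never drops below cost
        self.margin = max(0.0, (self.quote() + delta) / self.limit - 1.0)
```

A population of such sellers (and symmetric buyers) quoting into a retail market or a continuous double auction is the kind of setup the paper's simulations build on.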

Posters
Paper Nr: 28
Title:

Asynchronous Argumentation with Pervasive Personal Communication Tools

Authors:

Yuki Katsura, Hajime Sawamura, Takeshi Hagiwara and Jacques Riche

Abstract: In this paper, we propose an argument-based communication tool for humans and agents that supplements and provides an alternative to current communication systems such as Twitter and Line, in order to enable more deliberate and logical human communication. For this purpose, we devised asynchronous argumentation based on our logic of multiple-valued argumentation. It may also be described as asymptotic or incremental argumentation, since agents can approach truth or justification each time an argument is put forward by an agent. We have implemented the asynchronous argumentation system, named PIRIKA (pilot of the right knowledge and argument), on the pervasive personal tool, the iPad. Finally, some lessons learned from experimental uses of PIRIKA are reported.

Paper Nr: 95
Title:

A Multi-level Model for Multi-agent based Simulation

Authors:

Thomas Huraux, Nicolas Sabouret and Yvon Haradji

Abstract: In this paper, we consider the problem of modeling complex systems at several levels of abstraction. We design SIMLAB, a multi-level model for multi-agent based simulation. Our approach is based on the coexistence of different levels during simulation to enrich the model with complementary experts’ opinions. We show how the same concept can be defined independently of its granularity using the notion of a modeling axis. We consider recursive agents whose interactions and influences capture the inter-level dynamics. We also propose observations to detect and reify macroscopic entities.

Paper Nr: 120
Title:

Self-Optimizing Algorithms for Mobile Ad Hoc Networks based on Multiple Mobile Agents

Authors:

Yasushi Kambayashi, Tatsuya Shinohara and Munehiro Takimoto

Abstract: This paper presents algorithms that form optimal connecting configurations for Mobile Ad Hoc Networks (MANETs). A MANET is a computer network that is dynamically formed by autonomous mobile nodes. Today, the communication network is one of the most important infrastructures; when it is lost to a natural or accidental disaster, its recovery should be one of the first priorities. We propose a way of constructing an extemporized communication network on the spot using a herd of mobile robots that communicate over wireless links. The networks we consider are formed by multiple relay robots; the algorithms are therefore naturally distributed and are executed by the herd of relay robots. The relay robots move cooperatively but without any central control. In order to collect and distribute enough information to coordinate the behaviours of the participating relay robots, we employ mobile software agents that we have developed and successfully used in many applications. A number of multi-robot systems take advantage of MANETs and seek efficient use of relay robots while maintaining connectivity; our study contributes to this line of investigation. Numerical experiments show that our algorithms provide optimal configurations in certain cases.

Paper Nr: 127
Title:

An Agent-based Approach for Smart Energy Grids

Authors:

Alba Amato, Beniamino Di Martino, Marco Scialdone and Salvatore Venticinque

Abstract: The increasing demand for energy and the availability of several renewable energy solutions have stimulated plans for expanding and upgrading existing power grids in several countries. According to NIST, the smart grid will be one of the greatest achievements of the 21st century. By linking information technologies with the electric power grid to provide electricity with a brain, the smart grid promises many benefits, including increased energy efficiency, reduced carbon emissions, and improved power reliability. In this paper we present an agent-based architecture that supports the collection and processing of information about the local energy production and storage resources of neighborhoods of individual houses, and that schedules energy flows using negotiation protocols.

Paper Nr: 155
Title:

Complete Distributed Search Algorithm for Cyclic Factor Graphs

Authors:

Toshihiro Matsui and Hiroshi Matsuo

Abstract: Distributed Constraint Optimization Problems (DCOPs) have been studied as fundamental problems in multiagent systems. The Max-Sum algorithm has been proposed as a solution method for DCOPs. The algorithm is based on factor graphs, which consist of two types of nodes, for variables and for functions. While Max-Sum is an exact method for acyclic factor graphs, when the factor graph contains cycles it is an inexact method that may not converge. In this study, we propose a method that decomposes the cycles based on cross-edged pseudo-trees on factor graphs. We also present a basic scheme for distributed search algorithms that generalizes complete search algorithms on constraint graphs and the Max-Sum algorithm.
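The exactness of Max-Sum on acyclic factor graphs can be illustrated with a didactic single-factor sketch, where the factor-to-variable message for each value is simply a maximization over the other variables. This toy utility function is an assumption for illustration and has nothing to do with the paper's cross-edged decomposition.

```python
from itertools import product

def max_sum_single_factor(f, domains):
    """Exact max-sum on a factor graph with one function node:
    each variable's belief for value v is the maximum utility over
    all joint assignments in which that variable takes v."""
    n = len(domains)
    beliefs = []
    for i in range(n):
        b = {}
        for v in domains[i]:
            # message from the factor to variable i, evaluated at v
            b[v] = max(f(*a) for a in product(*domains) if a[i] == v)
        beliefs.append(b)
    # each variable picks its own maximizer; on acyclic graphs the
    # picks are globally consistent
    return [max(b, key=b.get) for b in beliefs]

# Toy utility over two binary variables, with a unique optimum at (1, 0)
u = lambda x1, x2: 2 if (x1, x2) == (1, 0) else (1 if x1 != x2 else 0)
```

With cycles, the same messages would circulate and reinforce themselves, which is why convergence is no longer guaranteed.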

Paper Nr: 160
Title:

Towards a Metric for Confidence in Identity - An Agent based Approach

Authors:

Brian A. Soeder and K. Suzanne Barber

Abstract: Determining the Identity of a person or system can be a difficult task given the size and complexity of the space. Automated agents can assist Identity providers in their efforts to verify a user’s identity before issuing a “credential” (e.g. username, email, ID#, etc.) required to participate in a given network. This paper describes an algorithm designed to contribute additional confidence to an Identity used in distributed interactions. Despite the best efforts currently available to guarantee the veracity of these credentials, gaps remain, as exemplified by the use of identities for compromise. This is a critical problem for distributed online interactions. By defining an approach to gain confidence in the Identity of each user in the network, the entire large-scale network can be made more secure.

Paper Nr: 161
Title:

Conflict Resolution of Production-marketing Collaborative Planning based on Multi-Agent Self-adaptation Negotiation

Authors:

Hao Li, Ting Pang, Yuying Wu and Guorui Jiang

Abstract: In order to overcome the lack of adaptability and learning ability of traditional negotiation, we take supply chain production-marketing collaborative planning negotiation as the research object, design a five-element negotiation model, adopt a negotiation strategy based on Q-reinforcement learning, and optimize the negotiation strategy with an RBF neural network that predicts the opponent’s information in order to adjust the extent of concession. Finally, we give an example verifying that, compared to unoptimized Q-reinforcement learning, our negotiation strategy can enhance the ability of the negotiating agents, reduce the number of negotiation rounds, and improve the efficiency of resolving conflicts in production-marketing collaborative planning.
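The tabular Q-learning rule underlying such a concession strategy can be sketched as follows, with states as coarse negotiation stages and actions as concession levels. The state/action names and parameter values are illustrative assumptions, not the paper's model.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning rule:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values()) if next_state in Q else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy negotiation: two stages, two concession levels per stage
Q = {"early": {"small": 0.0, "large": 0.0},
     "late":  {"small": 0.0, "large": 0.0}}

# A large early concession that moves the deal forward earns a reward,
# so its Q-value rises and the agent learns to prefer it in that stage
q_update(Q, "early", "large", reward=1.0, next_state="late")
```

In the paper's approach, an RBF network then approximates and smooths these values and forecasts the opponent's behaviour, rather than relying on the raw table alone.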

Paper Nr: 162
Title:

Modeling Self-interested Information Diffusion with Game Theory on Graphs

Authors:

Jeffrey Hudack, Nathaniel Gemelli and Jae Oh

Abstract: We model information diffusion through social networks using a game-theoretic paradigm. Our work focuses on the pairwise interactions between individuals and their social contacts, allowing each agent to make local decisions to maximize individual gain. This fully distributed approach is driven only by local utility and differs from many existing models that treat diffusion as a network process that occurs passively. Agents are inherently selfish, acting only to benefit from obtaining new information and from providing contacts with information that is new to them. Framed using game theory on graphs, we present a model that allows for parameterization of individual preference and models of pairwise interaction. We observe the effects of graph structure, incomplete information, and sharing cost on the model. We show that spatially organized graphs, due to their degree distribution, are much more resilient to higher costs of sharing. Additionally, we show how incomplete information often leads to more active agents at the cost of individual payoff. Finally, we provide insight into a number of extensions to this model that will allow for simulation of various diffusion phenomena.

Paper Nr: 173
Title:

Multiagent Approach for Effective Disaster Evacuation

Authors:

Yasuki Iizuka, Katsuya Kinoshita and Kayo Iizuka

Abstract: At times of disaster, or immediately prior to such periods, smooth evacuation is a key issue. However, it is difficult to achieve, because people tend to panic when faced with disaster. This paper proposes a system that supports effective evacuation from danger using the framework of the Distributed Constraint Optimization Problem (DCOP). The use of the DCOP framework enables the optimization of people’s evacuation timing without a central server. The system provides evacuation guidance that relieves congestion by calculating evacuation timing over an ad-hoc network of evacuees’ mobile devices (phones, PCs, etc.). In this paper, we focus on formalizing the disaster evacuation problem and solving it within the DCOP framework.

Paper Nr: 181
Title:

Massive Data Flows - Self-organization of Energy, Material, and Information Flows

Authors:

Takashi Ikegami and Mizuki Oka

Abstract: As opposed to “Big Data” as a buzzword, we attempt to find new patterns and structures generated by self-organization in the flow of massive data. We call this approach Massive Data Flows (MDF). Rather than simply making use of “Big Data”, we are interested in the new phenomena and theory that allow us to deal with the data without losing the autonomy, complexity, dynamics and structure that the data itself has. MDF is a generic term for a new kind of system dynamics: self-organization in complex open environments. Composed of many interacting heterogeneous elements, MDF systems exhibit self-referential, self-modifying, and self-sustaining dynamics that can enable door-opening innovation. While the web may be the best example of an MDF system, the concept is generic to natural and artificial systems such as brains, cells, markets and ecosystems. In this paper, we exemplify five potential MDF systems: the default mode network and the excitability of the web, the autonomous sensor network, chemical oil droplets, and court-and-cave computation with a many-core system.

Paper Nr: 189
Title:

Parallel Possibility Results of Preference Aggregation and Strategy-proofness by using Prolog

Authors:

Kenryo Indo

Abstract: Classical social choice theory provides axiomatic modeling for collective decision making in multi-agent situations as functions of a set of profiles (i.e., tuples of transitive orderings). The celebrated Arrow’s impossibility theorem (for unanimity-and-independence-obeying preference aggregation) and the Gibbard–Satterthwaite theorem (for strategy-proof voting procedures) assume the unrestricted domain as well as the transitivity of orderings. This paper presents a distribution map of all Arrow-type aggregation rules without the unrestricted domain axiom for the two-individual three-alternative case in parallel with non-imposed strategy-proof voting procedures by using a Prolog program that systematically removes profiles in the super-Arrovian domains.

Paper Nr: 192
Title:

A Multi-level Model of Motivations and Valuations for Cognitive Agents

Authors:

Samer Schaat, Klaus Doblhammer and Dietmar Dietrich

Abstract: In developing cognitive agents using a functional model of the human mind as their decision unit, a model of motivations and valuations is needed as the basis for the agents’ decision making. This enables agents to cope with their internal and external world while pursuing their own agenda. We show that a technical model based on the psychoanalytical drive concept and Damasio’s neuro-biological findings is appropriate for human-inspired cognitive agents. In particular, after overcoming the hurdles of interdisciplinary work between hermeneutic and axiomatic approaches, a transformation of psychoanalytical and neuro-biological concepts into a consistent and deterministic model solves the problem of motivations and valuations in artificial cognitive agents. This multi-level model is presented, in which multiple principles and influences of valuation are used to incrementally generate and decide on an agenda for the agent’s behavior.

Paper Nr: 196
Title:

An Agent-oriented Ground Vehicle's Automation using Jason Framework

Authors:

Reydson Schuenck Barros, Victor Hugo Heringer, Carlos Eduardo Pantoja, Nilson Mori Lazarin and Leonardo Machado de Moraes

Abstract: This paper proposes an agent-oriented ground vehicle automation that uses low-cost hardware. The vehicle's platform consists of a set of hardware and software layers that work with the Jason programming language for unmanned vehicle automation. This paper also presents a methodology with four programming layers to facilitate hardware integration and implementation. To validate and demonstrate the platform, an unmanned ground vehicle was constructed using an ATMEGA328 microcontroller, a library for serial communication and a six-function remote-controlled vehicle. The vehicle is able to move from one point to another based on its global position.

Paper Nr: 197
Title:

Information Dissemination in Social Networks

Authors:

Jiří Jelinek

Abstract: Social networks are currently among the most studied structures for information and knowledge exchange. While these structures are very well described in terms of their static structure, this article attempts to model their dynamic behavior and the spread of information and knowledge within them. A heuristic, event-based model of individual behavior in the network using message passing is presented. The main idea of the model is the agent’s need for information and knowledge in specific situations. On this basis, a multi-agent model of the social network used for information exchange has been built and practically implemented; it is presented together with some of its simulation outputs for tasks testing the dynamics of social networks.

Paper Nr: 204
Title:

Influence of Norms on Decision Making in Trusted Desktop Grid Systems - Making Norms Explicit

Authors:

Jan Kantert, Lukas Klejnowski, Yvonne Bernard and Christian Müller-Schloer

Abstract: In a Trusted Desktop Grid, agents cooperate to improve the speedup of their calculations. Since it is an open system, the behaviour of agents cannot be foreseen. In previous work we introduced a trust metric to cope with this information uncertainty: agents rate each other and exclude agents that behave undesirably. Previous agent implementations used hardcoded logic for those ratings. In this paper, we propose an approach to convert implicit rules into explicit norms. This allows learning agents to understand the expected behavior and helps us provide an improved reaction to attacks by changing norms.

Paper Nr: 226
Title:

Introduction for Instructions Hetero Sensitivity of Pheromone with Ant Colony Optimization

Authors:

Hisayuki Sasaoka

Abstract: Ant Colony System (ACS) is known to be a powerful meta-heuristic, and researchers have reported the effectiveness of various applications using the algorithm. On the other hand, the algorithm has known problems when employed in multi-agent systems, and we have proposed a new method based on Max-Min Ant System (MM-AS), an improvement on ACS. This paper describes the results of evaluation experiments with agents implementing our proposed method. In these experiments, we prepared several different types of agents with heterogeneous sensitivity to pheromone. The pheromones are deposited by the agents and help them search for the shortest path. Our use of such agents is inspired by a report from the field of biology. We then prepared several conditions for the RoboCup Rescue Simulation system (RCRS) and, to confirm the effectiveness of the method, examined the agents’ actions in the simulation system.
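Heterogeneous pheromone sensitivity can be illustrated with a minimal edge-selection sketch: each ant weights edges by pheromone level raised to its own sensitivity exponent. This is a generic ant-colony illustration under assumed parameter names, not the paper's MM-AS variant.

```python
def edge_probs(pheromone, beta):
    """Edge-selection probabilities for one ant: edge e is chosen with
    probability proportional to pheromone[e] ** beta, where beta is the
    ant's individual pheromone sensitivity."""
    weights = [t ** beta for t in pheromone]
    total = sum(weights)
    return [w / total for w in weights]

# Two ants facing the same two edges, one twice as strong in pheromone:
# a more sensitive ant (larger beta) concentrates harder on the strong edge,
# while a less sensitive ant keeps exploring the weaker one.
cautious = edge_probs([1.0, 2.0], beta=1.0)
eager = edge_probs([1.0, 2.0], beta=2.0)
```

Mixing such ant types is one way a colony can balance exploitation of the current best path against continued exploration.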