ICAART 2015 Abstracts

Area 1 - Artificial Intelligence

Full Papers
Paper Nr: 9
Title:

A Modal Logic for the Decision-Theoretic Projection Problem

Authors:

Gavin Rens

Abstract: We present a decidable logic in which queries can be posed about (i) the degree of belief in a propositional sentence after an arbitrary finite number of actions and observations and (ii) the utility of a finite sequence of actions after a number of actions and observations. Another contribution of this work is that a POMDP model specification is allowed to be partial or incomplete with no restriction on the lack of information specified for the model. The model may even contain information about non-initial beliefs. Essentially, entailment of arbitrary queries (expressible in the language) can be answered. A sound, complete and terminating decision procedure is provided.

Paper Nr: 11
Title:

Entry Point Matters - Effective Introduction of Innovation in Social Networks

Authors:

Ramon Hermoso and Maria Fasli

Abstract: Social networks have grown massively in the last few years and have become a lot more than mere message exchange platforms. Apart from serving purposes such as linking friends and family, job linking or news feeding, their nearly pervasive nature and presence in day-to-day activities make them the biggest potential market and access platform to hundreds of millions of customers ever built. Faced with such a complex and challenging environment, we claim that introducing innovation in an efficient way in such networks is of extreme importance. In this paper, we put forward a mechanism to select suitable entry points in the network at which to introduce the innovation, thereby fostering its acceptance and enhancing its diffusion. To do this, we use the underlying structure of the network as well as the influencing power some users exercise over others. We present results of testing our approach with both a Facebook dataset and different examples of random networks.

Paper Nr: 17
Title:

How to Decrease and Resolve Inconsistency of a Knowledge Base?

Authors:

Dragan Doder and Srdjan Vesic

Abstract: This paper studies different techniques for measuring and decreasing the inconsistency of a knowledge base. We define an operation that decreases the inconsistency of a knowledge base while losing a minimal amount of information. We also propose two different ways to compare knowledge bases. The first is a partial order that we define on the set of knowledge bases. We study this relation and identify its link with a particular class of inconsistency measures. We also study the links between the partial order we introduce and information measures. The second way we propose to compare knowledge bases is to define a class of metrics that give us a distance between knowledge bases. They are based on the symmetric set difference of the models of pairs of formulae from the two sets in question.
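
The symmetric-set-difference idea behind such metrics can be sketched as follows. Note that representing formulae directly by their model sets and aggregating over a base in Hausdorff style are illustrative assumptions, not the authors' exact definitions.

```python
# Illustrative sketch (not the authors' exact metric): formulae are
# represented by their model sets over a fixed set of worlds, and the
# distance between two formulae is the size of the symmetric set
# difference of their models.

def formula_distance(models_a, models_b):
    """Symmetric-set-difference distance between two formulae,
    each given as a frozenset of models (possible worlds)."""
    return len(models_a ^ models_b)

def kb_distance(kb1, kb2):
    """A Hausdorff-style aggregation over pairs of formulae: each
    formula is matched with its closest counterpart in the other
    base. This aggregation rule is an assumption for illustration."""
    d1 = max(min(formula_distance(f, g) for g in kb2) for f in kb1)
    d2 = max(min(formula_distance(f, g) for g in kb1) for f in kb2)
    return max(d1, d2)

# Worlds encoded as tuples of truth values for (p, q).
p_true = frozenset({(True, True), (True, False)})   # models of p
q_true = frozenset({(True, True), (False, True)})   # models of q

print(formula_distance(p_true, q_true))  # 2: they disagree on two worlds
```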

Paper Nr: 24
Title:

Critical Position Identification in Games and Its Application to Speculative Play

Authors:

Mohd Nor Akmal Khalid and Umi Kalsom Yusof

Abstract: Research in two-player perfect information games has been one of the main foci of computer-game-related studies in the domain of artificial intelligence. However, focusing on an effective search program is insufficient to give the “taste” of actual entertainment in the gaming industry. Instead of focusing on an effective search algorithm, we dedicate our study to realizing the possibility of applying speculative play. However, quantifying and determining this possibility is the main challenge of this study. For this purpose, the Conspiracy Number Search algorithm is considered, where the maximum and minimum conspiracy numbers are recorded in the test bed of a simple Tic-Tac-Toe game application. We analyze these numbers as measures for identifying critical positions, which determine the right moment for applying speculative play through operators formally defined in this article as the ↑ tactic and the ↓ tactic. Interesting results are obtained with convincing evidence, but further work is still needed in order to prove our hypothesis.

Paper Nr: 27
Title:

Inconsistency and Sequentiality in LTL

Authors:

Norihiro Kamide

Abstract: Inconsistency-tolerant temporal reasoning with sequential (ordered or hierarchical) information is gaining increasing importance in computer science applications such as medical informatics. A logical system for representing such reasoning is required to obtain a theoretical basis for such applications. In this paper, a new logic called paraconsistent sequential linear-time temporal logic (PSLTL) is introduced, extending the standard linear-time temporal logic (LTL). PSLTL can appropriately represent inconsistency-tolerant temporal reasoning with sequential information. The cut-elimination, complexity and completeness theorems for PSLTL are proved as the main results of this paper.

Paper Nr: 35
Title:

Thompson Sampling in the Adaptive Linear Scalarized Multi Objective Multi Armed Bandit

Authors:

Saba Yahyaa

Abstract: In the stochastic multi-objective multi-armed bandit (MOMAB), arms generate a vector of stochastic normal rewards, one per objective, instead of a single scalar reward. As a result, there is not one optimal arm, but a set of optimal arms (the Pareto front) under the Pareto dominance relation. The goal of an agent is to find the Pareto front. To find the optimal arms, the agent can use a linear scalarization function that transforms a multi-objective problem into a single-objective one by summing the weighted objectives. Selecting the weights is crucial, since different weights will result in selecting different optimal arms from the Pareto front. Usually, a predefined weight set is used; this can be computationally inefficient when different weights optimize the same Pareto-optimal arm and other arms in the Pareto front are left unidentified. In this paper, we propose a number of techniques that adapt the weights on the fly in order to improve the performance of the scalarized MOMAB. We use genetic and adaptive scalarization functions from multi-objective optimization to generate new weights. We propose to use a Thompson sampling policy to frequently select the weights that identify new arms on the Pareto front. We experimentally show that Thompson sampling improves the performance of the genetic and adaptive scalarization functions. All the proposed techniques improve the performance of the standard scalarized MOMAB with a fixed set of weights.
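
A Thompson sampling policy over a discrete set of weight vectors might be sketched roughly as follows. The Beta posteriors and the success/failure bookkeeping (rewarding a weight vector when it identified a new Pareto-optimal arm) are illustrative assumptions, not the paper's exact formulation.

```python
import random

# Hedged sketch: Thompson sampling over a discrete set of weight
# vectors. A weight vector counts as a "success" when playing it
# identified a new Pareto-optimal arm (an assumed reward signal).

class WeightThompsonSampler:
    def __init__(self, n_weights):
        self.successes = [1] * n_weights  # Beta(1, 1) priors
        self.failures = [1] * n_weights

    def select(self):
        """Sample one Beta value per weight vector; play the argmax."""
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, i, found_new_pareto_arm):
        """Update the posterior of weight vector i."""
        if found_new_pareto_arm:
            self.successes[i] += 1
        else:
            self.failures[i] += 1
```

Weight vectors that keep uncovering new Pareto-optimal arms are then sampled more often, while uninformative weights decay, which is the intuition the abstract describes.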

Paper Nr: 52
Title:

Exploration Versus Exploitation Trade-off in Infinite Horizon Pareto Multi-armed Bandits Algorithms

Authors:

Madalina Drugan and Bernard Manderick

Abstract: Multi-objective multi-armed bandits (MOMAB) are multi-armed bandits (MAB) extended to reward vectors. We use the Pareto dominance relation to assess the quality of reward vectors, as opposed to scalarization functions. In this paper, we study the exploration vs exploitation trade-off in infinite horizon MOMAB algorithms. Single-objective MABs explore the suboptimal arms and exploit a single optimal arm. MOMABs explore the suboptimal arms, but they also need to fairly exploit all optimal arms. We study the exploration vs exploitation trade-off of the Pareto UCB1 algorithm. We extend UCB2, another popular infinite horizon MAB algorithm, to reward vectors using the Pareto dominance relation. We analyse the properties of the proposed MOMAB algorithms in terms of upper regret bounds. We experimentally compare the exploration vs exploitation trade-off of the proposed MOMAB algorithms on a bi-objective Bernoulli environment coming from control theory.
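
The Pareto dominance relation used to compare reward vectors can be sketched minimally as follows (an illustration of the standard relation, not the authors' implementation).

```python
# u dominates v iff u is at least as good in every objective and
# strictly better in at least one.

def dominates(u, v):
    return (all(a >= b for a, b in zip(u, v))
            and any(a > b for a, b in zip(u, v)))

def pareto_front(vectors):
    """Arms whose mean reward vectors are not dominated by any other."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v != u)]

arms = [(0.8, 0.2), (0.5, 0.5), (0.4, 0.4), (0.1, 0.9)]
print(pareto_front(arms))  # (0.4, 0.4) is dominated by (0.5, 0.5)
```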

Paper Nr: 55
Title:

AGAGD - An Adaptive Genetic Algorithm Guided by Decomposition for Solving PCSPs

Authors:

Lamia Sadeg-Belkacem

Abstract: Solving a Partial Constraint Satisfaction Problem consists in assigning values to all the variables of the problem such that a maximal subset of the constraints is satisfied. An efficient algorithm for large instances of such problems, which are NP-hard, does not exist yet. Decomposition methods make it possible to detect and exploit crucial structures of the problems, such as clusters or cuts, and then apply that knowledge to solve the problem. This knowledge can be exploited by solving the different sub-problems separately before combining all the partial solutions in order to obtain a global one. This was the focus of a previous work, which led to some generic algorithms based on decomposition and using an adaptive genetic algorithm for solving the subproblems induced by the crucial structures coming from the decomposition. This paper aims to exploit the decomposition differently. Indeed, here the knowledge is used to improve this adaptive genetic algorithm. A new adaptive genetic algorithm guided by structural knowledge is proposed. It is designed to be generic, so that any decomposition method can be used and different heuristics for the genetic operators are possible. To prove the effectiveness of this approach, three heuristics for the crossover step are investigated.

Paper Nr: 81
Title:

Rejecting Foreign Elements in Pattern Recognition Problem - Reinforced Training of Rejection Level

Authors:

Wladyslaw Homenda

Abstract: The standard assumption in pattern recognition is that processed elements belong to the recognized classes. In practice, however, we are often faced with elements presented to recognizers that do not belong to such classes. For instance, paper-to-computer recognition technologies (e.g. character or music recognition, both printed and handwritten) must cope with garbage elements produced at the segmentation level. In this paper we distinguish between elements of desired classes and other ones, calling them native and foreign elements, respectively. The assumption that we have only native elements results in the incorrect inclusion of foreign ones into desired classes. Since foreign elements are usually not known at the stage of recognizer construction, standard classification methods fail to eliminate them. In this paper we study the construction of recognizers based on support vector machines and aimed at coping with foreign elements. Several tests are performed on real-world data.

Paper Nr: 89
Title:

Activity Recognition for Dogs Using Off-the-Shelf Accelerometer

Authors:

Tatsuya Kiyohara, Ryohei Orihara and Yuichi Sei

Abstract: Dogs are one of the most popular pets in the world, and more than 10 million dogs are bred annually in Japan (JPFA, 2013). Recently, primitive commercial services have been launched that record dogs’ activities and report them to their owners. Although an owner can be expected to want to know the dog’s activity in greater detail, a method proposed in a previous study failed to recognize some of the key actions. The demand for their identification is highlighted in responses to our questionnaire. In this paper, we show a method to recognize the actions of a dog by attaching only one off-the-shelf acceleration sensor to its neck. We apply DTW-D, a state-of-the-art time-series search technique, to activity recognition. To our knowledge, the application of DTW-D to the activity recognition of an animal is unprecedented, and this is the main contribution of this study. As a result, we were able to recognize ten different activities with a 65.8% classification F-measure.
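
Classic dynamic time warping, on which DTW-D builds (DTW-D compares the DTW distance against the Euclidean distance), can be sketched as follows; the full DTW-D ratio and the accelerometer pipeline are omitted.

```python
# Standard O(n*m) dynamic time warping with absolute-difference cost.
# This is the textbook algorithm, not the paper's DTW-D variant.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0: same shape, shifted in time
```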

Paper Nr: 94
Title:

Beyond Onboard Sensors in Robotic Swarms - Local Collective Sensing through Situated Communication

Authors:

Tiago Rodrigues and Miguel Duarte

Abstract: The constituent robots in swarm robotics systems are typically equipped with relatively simple, onboard sensors of limited quality and range. When robots have the capacity to communicate with one another, communication has so far been exclusively used for coordination. In this paper, we present a novel approach in which local, situated communication is leveraged to overcome the sensory limitations of the individual robots. In our approach, robots share sensory inputs with neighboring robots, thereby effectively extending each other’s sensory capabilities. We evaluate our approach in a series of experiments in which we evolve controllers for robots to capture mobile prey. We compare the performance of (i) swarms that use our approach, (ii) swarms in which robots use only their limited onboard sensors, and (iii) swarms in which robots are equipped with ideal sensors that provide extended sensory capabilities without the need for communication. Our results show that swarms in which local communication is used to extend the sensory capabilities of the individual robots outperform swarms in which only onboard sensors are used. Our results also show that in certain experimental configurations, the performance of swarms using our approach is close to the performance of swarms with ideal sensors.

Paper Nr: 95
Title:

Reactive Recovery from Machine Breakdown in Production Scheduling with Temporal Distance and Resource Constraints

Authors:

Roman Barták and Marek Vlk

Abstract: One of the classical problems of real-life production scheduling is the dynamics of manufacturing environments, with new production demands arriving and machines breaking down during schedule execution. Simple rescheduling from scratch in response to unexpected events occurring on the shop floor may require excessive computation time. Moreover, the recovered schedule may deviate prohibitively from the ongoing schedule. This paper studies two methods for modifying a schedule in response to a resource failure: right-shift of affected activities and simple temporal network recovery. Emphasis is placed on the speed of the rescheduling procedures as well as on minimum deviation from the original schedule. The scheduling model is motivated by the FlowOpt project, which is based on Temporal Networks with Alternatives and supports simple temporal constraints between the activities.

Paper Nr: 96
Title:

The Art of Balance - Problem-Solving vs. Pattern-Recognition

Authors:

Martyn Lloyd-Kelly

Abstract: The dual-process theory of human cognition proposes the existence of two systems for decision-making: a slower, deliberative, ``problem-solving'' system and a quicker, reactive, ``pattern-recognition'' system. The aim of this work is to explore the effect on agent performance of altering the balance of these systems in an environment of varying complexity. This is an important question, both in the realm of explanations of expert behaviour and to AI in general. To achieve this, we implement three distinct types of agent, embodying different balances of their problem-solving and pattern-recognition systems, using a novel, hybrid, human-like cognitive architecture. These agents are then situated in the virtual, stochastic, multi-agent ``Tileworld'' domain, whose intrinsic and extrinsic environmental complexity can be precisely controlled and widely varied. This domain provides an adequate test-bed to analyse the research question posed. A number of computational simulations are run. Our results indicate that there is a definite performance benefit for agents which use a mixture of problem-solving and pattern-recognition systems, especially in highly complex environments.

Paper Nr: 98
Title:

Information Assistance for Smart Assembly Stations

Authors:

Mario Aehnelt and Sebastian Bader

Abstract: Information assistance helps in many application domains to structure, guide and control human work processes. However, it lacks the formalisation and automated processing of background knowledge which, in turn, is required to provide ad-hoc assistance. In this paper, we describe our conceptual and technical work on including contextual background knowledge in raising awareness, guiding, and monitoring the assembly worker. We present cognitive architectures as the missing link between highly sophisticated manufacturing data systems and implicitly available contextual knowledge on work procedures and concepts of the work domain. Our work is illustrated with examples in SWI-Prolog and the Soar cognitive architecture.

Paper Nr: 106
Title:

A Particle Swarm Optimizer for Solving the Set Partitioning Problem in the Presence of Partitioning Constraints

Authors:

Gerrit Anders

Abstract: Solving the set partitioning problem (SPP) is at the heart of the formation of several organizational structures in multi-agent systems (MAS). In large-scale MAS, these structures can improve scalability and enable cooperation between agents with (different) limited resources and capabilities. In this paper, we present a discrete Particle Swarm Optimizer, i.e., a metaheuristic, that solves the NP-hard SPP in the context of partitioning constraints – which restrict the structure of valid partitionings in terms of acceptable ranges for the number and the size of partitions – in a general manner. It is applicable to a broad range of applications in which regional or global knowledge is available. For example, our algorithm can be used for coalition structure generation, strict partitioning clustering (with outliers), anticlustering, and, in combination with an additional control loop, even for the creation of hierarchical system structures. Our algorithm relies on basic set operations to come to a solution and, as our evaluation shows, finds high-quality solutions in different scenarios.

Paper Nr: 109
Title:

Computing Inconsistency Using Logical Argumentation

Authors:

Badran Raddaoui

Abstract: Measuring the degree of conflict of a knowledge base can help us deal with inconsistencies. Several semantic and syntax-based approaches have been proposed separately. In this paper, we use logical argumentation as a framework for computing inconsistency measures for propositional formulae. Using the complete argumentation tree, we show that our family of measures is able to finely express the inconsistency of a formula according to its context and allows us to distinguish between formulae. We extend our measure to quantify the degree of inconsistency of a set of formulae and give a general formulation of inconsistency using some logical properties.

Paper Nr: 112
Title:

Multiagent Planning by Plan Set Intersection and Plan Verification

Authors:

Jan Jakubův

Abstract: Multiagent planning is a coordination technique used for the deliberative acting of a team of agents. One of the vital planning techniques uses a declarative description of agents’ plans based on Finite State Machines and their later coordination by intersection of such machines, with successive verification of the resulting joint plans. In this work, we firstly propose to use projections of agents’ actions directly for multiagent planning based on the iterative building of a coordinated multiagent plan. Secondly, we describe the integration of the static analysis provided by process calculi type systems for approximate verification of exchanged local plans. Finally, we compare our approach with a current state-of-the-art planner on an extensive benchmark set.

Paper Nr: 117
Title:

Fast Solving of Influence Diagrams for Multiagent Planning on GPU-enabled Architectures

Authors:

Fadel Adoe

Abstract: Planning under uncertainty in multiagent settings is highly intractable because of history and plan space complexities. Probabilistic graphical models exploit the structure of the problem domain to mitigate the computational burden. In this paper, we introduce the first parallelization of planning in multiagent settings on a CPU-GPU heterogeneous system. In particular, we focus on the algorithm for exactly solving interactive dynamic influence diagrams, a recognized graphical model for multiagent planning. Beyond parallelizing the standard Bayesian inference, the computation of decisions’ expected utilities is parallelized. The GPU-based approach provides significant speedup on two benchmark problems.

Paper Nr: 128
Title:

LS2C – A Platform to Design, Implement and Execute Social Computations

Authors:

Flavio S. Correa Da Silva

Abstract: Social computers have been characterised as goal oriented complex systems comprised of humans as well as computational devices. Such systems can be found in natura in a variety of scenarios, as well as designed to tackle specific issues of social and economic relevance. In the present article we introduce the Lightweight Situated Social Calculus (LS2C) as a language to design executable specifications of interaction protocols for social computations. Additionally, we describe a platform to process these specifications, giving them a computational realisation. We argue that LS2C can be used to design, implement and execute social computations.

Paper Nr: 142
Title:

A Qualitative Representation of a Figure and Construction of Its Planar Class

Authors:

Kazuko Takahashi

Abstract: PLCA is a framework for qualitative spatial reasoning. It provides a symbolic expression of spatial entities and allows reasoning on this expression. A figure is represented using the objects used to construct it, that is, points, lines, circuits and areas, as well as the relationships between them, without numerical data. The figure is identified by the patterns of connection between the objects. For a given PLCA expression, the conditions for planarity, that is, the existence of a corresponding figure on a two-dimensional plane, have been shown; however, the construction of such a PLCA expression has not been discussed. In this paper, we describe a method of constructing such expressions inductively, and prove that the resulting class coincides with that of planar PLCA. Part of the proof is implemented using the proof assistant Coq.

Paper Nr: 169
Title:

Offline Evolution of Normative Systems

Authors:

Magnus Hjelmblom

Abstract: An approach to the pre-runtime design of normative systems for problem-solving multi-agent systems (MAS) is suggested. A key element of this approach is to employ evolutionary mechanisms to evolve efficient normative systems. To illustrate, a genetic algorithm is used in the process of designing a normative system for an example MAS based on the DALMAS architecture for norm-regulated MAS. It is demonstrated that an evolutionary algorithm may be a useful tool when designing norms for problem-solving MAS.

Short Papers
Paper Nr: 4
Title:

A Cross-lingual Part-of-Speech Tagging for Malay Language

Authors:

Norshuhani Zamin and Zainab Abu Bakar

Abstract: Cross-lingual annotation projection methods can benefit from rich-resourced languages to improve the performance of Natural Language Processing (NLP) tasks in less-resourced languages. In this research, Malay is used as the less-resourced language and English as the rich-resourced language. The research aims to reduce the deadlock in Malay computational linguistics research, caused by the shortage of Malay tools and annotated corpora, by exploiting state-of-the-art English tools. This paper proposes a cross-lingual annotation projection based on word alignment between two languages with syntactical differences. A word alignment method known as MEWA (Malay-English Word Aligner), which integrates a Dice coefficient and a bigram string similarity measure, is proposed. MEWA is used to automatically induce annotations using a Malay test collection on terrorism and an identified English tool. In the POS annotation projection experiment, the algorithm achieved an accuracy rate of 79%.
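
The two similarity components the abstract mentions, a Dice coefficient and a bigram string similarity, can be sketched as follows; how MEWA actually combines them is not specified here, so the scoring details are illustrative.

```python
# Illustrative sketch of the two components. MEWA's actual scoring
# and combination rule may differ.

def dice(cooccur, count_x, count_y):
    """Dice coefficient from co-occurrence and marginal counts."""
    total = count_x + count_y
    return 2.0 * cooccur / total if total else 0.0

def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def bigram_similarity(w1, w2):
    """Dice coefficient over the sets of character bigrams."""
    b1, b2 = bigrams(w1), bigrams(w2)
    if not (b1 or b2):
        return 0.0
    return 2.0 * len(b1 & b2) / (len(b1) + len(b2))

# A cognate-like Malay/English pair has high bigram overlap.
print(round(bigram_similarity("polis", "police"), 2))  # 0.67
```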

Paper Nr: 7
Title:

Speeding up Online POMDP Planning - Unification of Observation Branches by Belief-state Compression Via Expected Feature Values

Authors:

Gavin Rens

Abstract: A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism of observations. The method is based on the expected values of domain features. The new algorithm is experimentally compared to three other online POMDP algorithms, outperforming them on the given test domain.
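
The branch-unification idea can be sketched roughly as follows: instead of keeping one child belief per observation, the observation branches are merged into a single expected belief weighted by observation likelihoods. The dictionary representation and the example numbers are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: merge per-observation child beliefs b(s|o) into a
# single expected belief sum_o P(o) * b(s|o), so the tree branches
# only on actions.

def expected_belief(branch_beliefs, obs_probs):
    """branch_beliefs: {observation: {state: prob}};
    obs_probs: {observation: likelihood}."""
    merged = {}
    for obs, belief in branch_beliefs.items():
        for state, p in belief.items():
            merged[state] = merged.get(state, 0.0) + obs_probs[obs] * p
    return merged

branches = {"beep": {"door-left": 0.9, "door-right": 0.1},
            "silence": {"door-left": 0.2, "door-right": 0.8}}
probs = {"beep": 0.5, "silence": 0.5}
print(expected_belief(branches, probs))
# door-left ~0.55, door-right ~0.45 (up to float rounding)
```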

Paper Nr: 18
Title:

Intelligent Agents - Conversations from Human-agent Imitation Games

Authors:

Kevin Warwick and Huma Shah

Abstract: What do humans say/ask beyond initial greetings? Are humans always the best at conversation? How easy is it to distinguish an intelligent human from an ‘intelligent agent’ just from their responses to unrestricted questions during a conversation? This paper presents an insight into the nature of human communications, including behaviours and interactions, from a type of interaction - stranger-to-stranger discourse realised from implementing Turing’s question-answer imitation games at Bletchley Park UK in 2012 as part of the Turing centenary commemorations. The authors contend that the effects of lying, misunderstanding, humour and lack of shared knowledge during human-machine and human-human interactions can provide an impetus to building better conversational agents increasingly deployed as virtual customer service agents. Applying the findings could improve human-robot interaction, for example as conversational companions for the elderly or unwell. But do we always want these agents to talk like humans do? Suggestions to advance intelligent agent conversation are provided.

Paper Nr: 19
Title:

Choroid Characterization in EDI OCT Retinal Images Based on Texture Analysis

Authors:

A. Gonzalez-Lopez and B. Remeseiro

Abstract: Optical Coherence Tomography (OCT) is a widely used imaging technique in the ophthalmic field for diagnostic purposes. Since the layers composing the retina can be identified in these images, several image processing-based methods have been presented to segment them automatically, with the aim of developing medical-support applications. Recently, the appearance of Enhanced Depth Imaging (EDI) OCT has made it possible to tackle the exploration of the choroid, which provides rich information about eye processes. Therefore, segmentation of the choroid layer has become one of the most relevant problems tackled in this field, but this layer presents different features than the rest of the layers. In this work, a novel texture-based study is proposed in order to show that textural information can be used to characterize this layer. A pattern recognition process is carried out using different descriptors and a classification process, with annotations made by two experts considered for validation. Results show that characterization using texture features is effective, with success rates over 90%.

Paper Nr: 21
Title:

A Probabilistic Doxastic Temporal Logic for Reasoning about Beliefs in Multi-agent Systems

Authors:

Karsten Martiny and Ralf Möller

Abstract: We present Probabilistic Doxastic Temporal (PDT) Logic, a formalism to represent and reason about probabilistic beliefs and their evolution in multi-agent systems. It can quantify beliefs through probability intervals and incorporates the concepts of frequency functions and epistemic actions. We provide an appropriate semantics for PDT and show how agents can update their beliefs with respect to their observations.

Paper Nr: 36
Title:

Handling Default Data under a Case-based Reasoning Approach

Authors:

Bruno Fernandes, Mauro Freitas and Cesar Analide

Abstract: The knowledge acquired through past experiences is of the utmost importance when humans or machines try to find solutions for new problems based on past ones; this forms the core of any Case-based Reasoning (CBR) approach to problem solving. On the other hand, existing CBR systems are neither complete nor adaptable to specific domains. Indeed, the effort to adapt either the reasoning process or the knowledge representation mechanism to a new problem is too high, i.e., it is extremely difficult to adapt the input to the computational framework in order to get a solution to a particular problem. This is the drawback addressed in this work.

Paper Nr: 42
Title:

Bioplausible Multiscale Filtering in Retinal to Cortical Processing as a Model of Computer Vision

Authors:

Nasim Nematzadeh

Abstract: Visual illusions have emerged as an attractive field of research, following the discovery over the last century of a variety of deep and mysterious mechanisms of visual information processing in the human visual system. Among the many classes of visual illusion relating to shape, brightness, colour and motion, “geometrical illusions” are essentially based on the misperception of orientation, size, and position. The main focus of this paper is on illusions of orientation, sometimes referred to as “tilt illusions”, where parallel lines appear not to be parallel, a straight line is perceived as a curved line, or angles where lines intersect appear larger or smaller. Although some low-level and high-level explanations have been proposed for geometrical tilt illusions, a systematic explanation based on model predictions of both illusion magnitude and local tilt direction is still an open issue. Here a neurophysiological model is expounded, based on a Difference of Gaussians implementation of a classical receptive field model of retinal processing, that predicts tilt illusion effects.
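
The Difference of Gaussians centre-surround model that such work builds on can be sketched in one dimension as follows; the kernel radius and sigmas are arbitrary illustrative choices, not the paper's parameters.

```python
import math

# A minimal 1-D Difference of Gaussians (DoG) kernel: a narrow
# excitatory centre Gaussian minus a wider inhibitory surround
# Gaussian, the classical receptive-field model.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalise to unit sum

def dog_kernel(sigma_centre, sigma_surround, radius):
    """Centre Gaussian minus wider surround Gaussian."""
    c = gaussian_kernel(sigma_centre, radius)
    s = gaussian_kernel(sigma_surround, radius)
    return [a - b for a, b in zip(c, s)]

kernel = dog_kernel(1.0, 2.0, 5)
print(abs(sum(kernel)) < 1e-9)  # True: excitation and inhibition balance
```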

Paper Nr: 46
Title:

Comparison of Power Consumption Reduce Effect of Intelligent Lighting System and Lighting Control System Using Motion Sensors

Authors:

Katsunori Onobayashi, Yuki Sakakibara and Hiromitsu Nakabayashi

Abstract: Designed in accordance with conventional uniform lighting systems, lighting control systems that use motion sensors allow lighting control per area, because they only switch on the lights linked to the motion sensors. However, further power consumption reductions are possible by using dimmer control for individual lights to supply the level of brightness desired by each worker (hereafter referred to as target illuminance) instead of the per-area method. In the present study, we therefore conducted a comparative experiment on the power consumption of a lighting control system that uses motion sensors and a system that controls the lighting for each worker (hereafter referred to as the intelligent lighting system). The validity of the power consumption reduction in offices where the intelligent lighting system was introduced was determined using a comparative simulation. A simulation was performed for various worker patterns in a mock-up of an actual office environment to verify the validity of the proposed system. The simulation results showed the effectiveness of the proposed method under all work patterns and thus indicated that the intelligent lighting system saves more energy than the lighting control system that uses motion sensors.

Paper Nr: 48
Title:

The Localization of Mindstorms NXT in the Magnetic Unstable Environment Based on Histogram Filtering

Authors:

Piotr Artiemjew

Abstract: During the localization of a robot equipped with a magnetic compass, we can encounter the problem of magnetic deviations in a building. These can be caused by electric power sources, working devices or even the heating system. Magnetic deviations make it difficult to localize the robot properly and could cause the loss of its position on the map. In this paper we have tested a method of histogram localization using a map of north directions and an emergency north direction. For the tests we designed a robot based on Mindstorms NXT parts. Our construction consists of an NXT brick, four sonars, one compass, one touch sensor and two sensor multiplexers. All software was developed in C++ with the NXT++ library, which is actively supported by the author. Tests were performed in a real environment, and the proposed tuned localization method turned out to be resistant to magnetic deviations.
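
A measurement update of a histogram filter over a grid map of stored compass ("north direction") readings might look roughly like the following; the sensor model, tolerance and weights are illustrative assumptions, not the paper's tuned method.

```python
# Hedged sketch of one Bayes measurement update of a histogram filter:
# cells whose stored north-direction reading matches the current
# compass measurement are upweighted, then the belief is renormalised.
# The motion update is omitted.

def histogram_update(belief, map_readings, measured, tolerance=10.0):
    hit, miss = 0.8, 0.2  # illustrative sensor model
    new = []
    for b, expected in zip(belief, map_readings):
        # Angular difference in degrees, handling wraparound at 360.
        diff = abs((measured - expected + 180) % 360 - 180)
        new.append(b * (hit if diff <= tolerance else miss))
    total = sum(new)
    return [v / total for v in new]

belief = [0.25, 0.25, 0.25, 0.25]          # uniform prior over 4 cells
map_readings = [0.0, 90.0, 180.0, 270.0]   # stored north directions
posterior = histogram_update(belief, map_readings, measured=92.0)
print(posterior)  # mass concentrates on the cell storing 90 degrees
```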

Paper Nr: 50
Title:

Stopwords Identification by Means of Characteristic and Discriminant Analysis

Authors:

Giuliano Armano

Abstract: Stopwords are meaningless, non-significant terms that frequently occur in a document. They should be removed, like noise. Traditionally, two different approaches to building a stoplist have been used: the first considers the most frequent terms of a language (e.g., an English stoplist), while the second includes the terms that occur most often in a document collection. In several tasks, e.g., text classification and clustering, documents are typically grouped into categories. We propose a novel approach aimed at automatically identifying specific stopwords for each category. The proposal relies on two unbiased metrics that allow the informative content of each term to be analyzed: one measures the discriminant capability and the other the characteristic capability. For each term, the former is expected to be high in accordance with the ability to distinguish a category from the others, whereas the latter is expected to be high according to how frequent and common the term is over all categories. A preliminary study and experiments have been performed, supporting our insight. Results confirm that, for each domain, the metrics easily identify specific stoplists which include both classical and category-dependent stopwords.

Paper Nr: 56
Title:

The Application of Learning Theories into Abdullah: An Intelligent Arabic Conversational Agent Tutor

Authors:

Omar G. Alobaidi and Keeley Crockett

Abstract: This paper outlines the research and development of a Conversational Intelligent Tutoring System (CITS) named Abdullah, focusing on the novel application of learning theories. Abdullah CITS is a software program intended to converse in natural language with students aged 10 to 12 years old about the essential topics in Islam. The CITS aims to mimic a human Arabic tutor by engaging the students in dialogue using Modern Arabic Language (MAL) and Classical Arabic Language (CAL), utilizing supportive evidence from the Quran and Hadith. Abdullah CITS is able to capture the user’s level of knowledge and adapt the tutoring session and tutoring style to suit that particular learner. This is achieved through the inclusion of several learning theories implemented in Abdullah’s architecture, which are applied to tailor the tutoring to an individual learner. There are no known learning theories specific to CITS; the novelty of the approach therefore lies in the combination of well-known learning theories typically employed in a classroom environment. The system was evaluated through end-user testing with the target age group in schools in Jordan and the UK. The initial evaluation has produced some positive results, indicating that Abdullah gauges the individual learner’s knowledge level and adapts the tutoring session to ensure that learning gain is achieved.

Paper Nr: 66
Title:

Simple Temporal Networks with Partially Shrinkable Uncertainty

Authors:

Andreas Lanz and Roberto Posenato

Abstract: The Simple Temporal Network with Uncertainty (STNU) model focuses on the representation and evaluation of temporal constraints on time-point variables (timepoints), some of which (i.e., contingent timepoints) cannot be assigned (i.e., executed by the system) but only observed. Moreover, a temporal constraint is expressed as an admissible range of delays between two timepoints. For the STNU model, it is interesting to determine whether it is possible to execute all the timepoints under the control of the system while still satisfying all given constraints, no matter when the contingent timepoints happen within the given time ranges (controllability check). Existing approaches assume that the original contingent time range cannot be modified during execution. In the real world, however, the allowed time range may change within certain boundaries, but cannot be completely shrunk. To represent this possibility more properly, we propose the Simple Temporal Network with Partially Shrinkable Uncertainty (STNPSU) as an extension of the STNU. In particular, STNPSUs allow a contingent range to be represented in a way that can be shrunk during run time, as long as shrinking does not go beyond a given threshold. We further show that STNPSUs allow STNUs to be represented as a special case, while maintaining the same efficiency for both controllability checks and execution.

Paper Nr: 67
Title:

Impact on Bayesian Networks Classifiers When Learning from Imbalanced Datasets

Authors:

M. Julia Flores and José A. Gámez

Abstract: In this paper we present a study on the behaviour of some representative Bayesian Network Classifiers (BNCs) when the dataset they are learned from presents imbalanced data, that is, there are far fewer cases labelled with a particular class value than with the other ones (assuming binary classification problems). This is a typical source of trouble in some datasets, and the development of more robust techniques is currently very important. In this study, we have selected a benchmark of 129 imbalanced datasets and performed an analytical approach focusing on BNCs. Our results show good performance of these classifiers, which outperform decision trees (C4.5). Finally, an algorithm to improve the performance of any BNC is also given. We have carried out experiments showing how oversampling the minority class can achieve the desired value of the imbalance ratio (IR), defined as the number of majority-class cases divided by the number of minority-class cases. From this work we can conclude that BNCs show very good performance on imbalanced datasets, and that our proposal enhances their results on those datasets that yielded poor results.
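The oversampling step the abstract mentions is easy to sketch. The following is a minimal illustration under our own assumptions (random duplication of minority cases; the paper's exact procedure may differ): minority-class cases are replicated until the imbalance ratio IR = |majority| / |minority| reaches a target value.

```python
# Minimal sketch (assumed, not the authors' code): randomly oversample the
# minority class until the imbalance ratio IR = |majority| / |minority|
# drops to the desired target value.
import random

def oversample_to_ir(majority, minority, target_ir, seed=0):
    rng = random.Random(seed)
    # how many minority cases we need in total to reach the target IR
    needed = int(len(majority) / target_ir) - len(minority)
    extra = [rng.choice(minority) for _ in range(max(0, needed))]
    return majority, minority + extra

maj = list(range(100))   # 100 majority-class cases
mino = list(range(5))    # 5 minority-class cases, so IR = 20
maj, mino = oversample_to_ir(maj, mino, target_ir=2.0)
# the minority class now has 50 cases, giving IR = 2.0
```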

Paper Nr: 76
Title:

Airline Disruption Management - Dynamic Aircraft Scheduling with Ant Colony Optimization

Authors:

Henrique Sousa and Ricardo Teixeira

Abstract: Disruption management is one of the main concerns of any airline company, as it can influence its annual revenue by upwards of 3%. Most medium to large airlines have specialized teams which focus on recovering disrupted schedules with very little automation. This paper presents a new automated approach to solve both the Aircraft Assignment Problem (AAP) and the Aircraft Recovery Problem (ARP), where the solutions are responsive to unforeseen events. The developed algorithm, based on Ant Colony Optimization, aims to minimize the operational costs involved and is designed to schedule and reschedule flights dynamically by using a sliding window. Test results indicate that this approach is feasible, both in terms of time and quality of the proposed solutions.

Paper Nr: 80
Title:

Time Series Modelling with Fuzzy Cognitive Maps - Study on an Alternative Concept’s Representation Method

Authors:

Wladyslaw Homenda

Abstract: In this article we discuss an approach to time series modelling based on Fuzzy Cognitive Maps (FCMs). We introduce an FCM design method that is based on replicated, ordered time series data points. We name this representation method history h, where h is the number of consecutive data points we gather. A custom procedure for concept/node extraction follows the same convention. The objective of the study reported in this paper was to investigate how increasing h influences modelling accuracy. We show, on a selection of 12 time series, that the higher the h, the smaller the error. Increasing h improves the model’s quality without increasing the FCM’s size. The method is stable: gains are comparable for FCMs of different sizes.

Paper Nr: 90
Title:

A Declarative Model for Reasoning about Form Security

Authors:

Aaron Hunter

Abstract: We introduce a formal methodology for analysing the security of digital forms, by representing form signing procedures in a declarative action formalism. In practice, digital forms are represented as XML documents and the security of information is guaranteed through the use of digital signatures. However, the security of a form can be compromised in many different ways. For example, an honest agent might be convinced to make a commitment that they do not wish to make or they may be fooled into believing that another agent has committed to something when they have not. In many cases, these attacks do not require an intruder to break any form of encryption or digital signature; instead, the intruder simply needs to manipulate the way signatures are applied and forms are passed between agents. In this paper, we demonstrate that form signing procedures can actually be seen as a variation of the message passing systems used in connection with cryptographic protocols. We start with an existing declarative model for reasoning about cryptographic protocols in the Situation Calculus, and we show how it can be extended to identify security issues related to digital signatures, and form signing procedures. We suggest that our results could be used to help users create secure digital forms, using tools such as IBM’s Lotus Forms software.

Paper Nr: 99
Title:

Towards a Generic Architecture for Recommenders Benchmarking

Authors:

Mohamed Ramzi Haddad and Hajer Baazaoui

Abstract: With the current growth of Internet sales and content consumption, more research effort is focusing on developing recommendation and personalization algorithms as a solution to the choice overload problem. In this paper, we first review several state-of-the-art recommendation algorithms in order to highlight their main ideas and methodologies. Then, we propose a generic architecture for recommender systems benchmarking. Using the proposed architecture, we implement and evaluate several variants of existing recommendation algorithms and compare their results with our unified recommendation model. The experiments are conducted on a real-world dataset in order to assess the genericity of our recommendation model and its quality. Finally, we conclude with some ideas for further development and research.

Paper Nr: 114
Title:

Model Guided Sampling Optimization for Low-dimensional Problems

Authors:

Lukáš Bajer and Martin Holeňa

Abstract: Optimization of very expensive black-box functions requires utilizing the maximum information gathered during the optimization process. Model Guided Sampling Optimization (MGSO) forms a more robust alternative to Jones’ Gaussian-process-based EGO algorithm. Instead of maximizing expected improvement as EGO does, MGSO samples the probability of improvement, which is shown to help avoid getting trapped in local minima. Further, MGSO can reach close-to-optimum solutions faster than standard optimization algorithms on low-dimensional or smooth problems.
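The probability of improvement that MGSO samples from has a simple closed form under a Gaussian model prediction. The sketch below is our own illustration (not the MGSO implementation); the numbers are arbitrary.

```python
# Minimal sketch (our illustration, not the MGSO code): the probability of
# improvement at a candidate point x whose objective value is modelled as
# Gaussian with the given mean and standard deviation, for minimization.
from math import erf, sqrt

def probability_of_improvement(mean, std, best_so_far):
    # P(f(x) < best_so_far) when f(x) ~ N(mean, std^2)
    if std == 0:
        return 1.0 if mean < best_so_far else 0.0
    z = (best_so_far - mean) / std
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

# MGSO draws candidate points with probability proportional to this value,
# rather than maximizing expected improvement as EGO does.
p = probability_of_improvement(mean=0.2, std=0.5, best_so_far=0.0)
```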

Paper Nr: 121
Title:

Fast Item-Based Collaborative Filtering

Authors:

David Ben Shimon and Lior Rokach

Abstract: Item-based Collaborative Filtering (CF) models offer good recommendations with low latency. Still, constructing such models is often slow, requiring the comparison of all item pairs and then caching, for each item, the list of most similar items. In this paper we suggest methods for reducing the number of item-pair comparisons through simple clustering, where similar items tend to fall in the same cluster. We propose two methods, one that uses Locality Sensitive Hashing (LSH) and another that uses the item consumption cardinality. We evaluate the two methods, demonstrating that the cardinality-based method reduces computation time dramatically without harming accuracy.
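The cardinality idea can be sketched in a few lines. The following is our own toy illustration, not the paper's algorithm: items are bucketed by how many users consumed them, and only items in the same bucket are compared, so far fewer item pairs are scored.

```python
# Toy sketch (assumptions, not the paper's method): reduce item-pair
# comparisons by bucketing items on consumption cardinality; cosine
# similarity is computed only within a bucket.
from collections import defaultdict
from math import sqrt

def cosine(u, v):
    # cosine similarity between two sets of user ids (binary ratings)
    return len(u & v) / (sqrt(len(u)) * sqrt(len(v))) if u and v else 0.0

def bucketed_similarities(item_users, bucket_width=2):
    # item_users: item -> set of users who consumed it
    buckets = defaultdict(list)
    for item, users in item_users.items():
        buckets[len(users) // bucket_width].append(item)
    sims = {}
    for items in buckets.values():
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                sims[(a, b)] = cosine(item_users[a], item_users[b])
    return sims

item_users = {"A": {1, 2, 3}, "B": {1, 2, 4}, "C": {1}, "D": {2}}
sims = bucketed_similarities(item_users)
# "A" and "B" (cardinality 3) are compared, as are "C" and "D"
# (cardinality 1); cross-bucket pairs such as ("A", "C") are skipped.
```

The intuition is that items with very different consumption counts rarely end up highly similar anyway, so skipping those pairs costs little accuracy.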

Paper Nr: 137
Title:

The In-between Machine - The Unique Value Proposition of a Robot or Why we are Modelling the Wrong Things

Authors:

Johan Hoorn, Elly Konijn and Desmond Germans

Abstract: We avow that we as researchers of artificial intelligence may have properly modelled psychological theories but that we overshot our goal when it came to easing the loneliness of elderly people by means of social robots. Following a documentary film shot about our flagship machine, Hanson’s Robokind “Alice”, together with supplementary observations and research results, we changed our position on what to model for usefulness and what to leave to basic science. We formulate a number of effects that a social robot may provoke in lonely people and point out those imperfections in machine performance that seem to be tolerable. We moreover make the point that care offered by humans is not necessarily the most preferred – even when, or sometimes exactly because, emotional concerns are at stake.

Paper Nr: 145
Title:

Automatic Political Profiling in Heterogeneous Corpora

Authors:

Hodaya Uzan and Esther David

Abstract: In this paper we consider automatic political tendency recognition in a variety of genres. To this end, four different types of texts in Hebrew with varying levels of political content (manifestly political, semi-political, non-political) are examined. It is found that in each case, training and testing in the same genre yields strong results. More significantly, training on political texts yields classifiers sufficiently strong to classify non-political personal Facebook pages with fair accuracy. This suggests that individuals’ political tendencies can be identified without recourse to any tagged personal data.

Paper Nr: 146
Title:

A Knowledge Based Framework for Case-specific Diagnosis

Authors:

Ganesh Ram Santhanam and Gopalakrishnan Sivaprakasam

Abstract: We present a framework whereby the expert knowledge of a domain is represented as a description logic knowledge base. Based on this framework, we present an approach that uses a knowledge-based system for diagnosis, allowing users to key in findings for a case and obtain the corresponding differential diagnosis. The framework also prompts hypothetical findings that can effectively guide the user towards a targeted diagnosis. The framework allows iterative and interactive updates of the case-specific knowledge. Computing the differential diagnosis and hypotheses can be formulated directly as conjunctive queries on the original knowledge base using the case-specific knowledge. We illustrate the applicability of our framework in the context of medical diagnosis, although the approach is equally applicable to a broad range of diagnosis problems such as network forensics and criminal investigation.

Paper Nr: 150
Title:

An Artificial Immune Approach for Optimizing Crowd Emergency Evacuation Route Planning Problem

Authors:

Mohd Nor Akmal Khalid and Umi Kalsom Yusof

Abstract: Disastrous situations, whether natural (such as fires, earthquakes, rising tides and hurricanes) or man-made (such as terrorist bombings, chemical spills, and so on), have claimed the lives of thousands. Optimizing evacuation operations during an emergency therefore requires an effective crowd evacuation plan, which is acknowledged to be a vital topic both in societal research and in the emergency route planning (ERP) community. Prior approaches to emergency evacuation that encompass the needs of a variety of public communities, as well as the complexity of the situation, are summarized and discussed. This paper introduces an immune algorithm (IA) to optimize the evacuation plan for solving ERP problems. The approach is first validated against previous work, while further experimentation reveals the effectiveness of the proposed IA, with regard to certain parameter calibrations, in the context of ERP problems. The findings are summarized and presented, and potential future work is identified.

Paper Nr: 151
Title:

Finding Resilient Solutions for Dynamic Multi-Objective Constraint Optimization Problems

Authors:

Maxime Clement and Tenda Okimoto

Abstract: Systems Resilience is a large-scale multi-disciplinary research effort that aims to identify general principles underlying the resilience of real-world complex systems. Many conceptual frameworks have been proposed and discussed in the literature since Holling’s seminal paper (1973). Schwind et al. (2013) recently adopted a computational point of view of Systems Resilience and modeled a resilient system as a dynamic constraint optimization problem. However, many real-world optimization problems involve multiple criteria that should be considered separately and optimized simultaneously. Also, it is important to provide an algorithm that can evaluate the resilience of a dynamic system. In this paper, a framework for the Dynamic Multi-Objective Constraint Optimization Problem (DMO-COP) is introduced, and two solution criteria for solving this problem are provided, namely resistance and functionality, which are properties of interest underlying resilience for DMO-COPs. Also, as an initial step toward developing an efficient algorithm for finding resilient solutions of a DMO-COP, an algorithm called the Algorithm for Systems Resilience (ASR), which computes every resistant and functional solution for DMO-COPs, is presented and evaluated with different types of dynamic changes.

Paper Nr: 153
Title:

Measuring Adaptability of “Swarm Intelligence” for Resource Scheduling and Optimization in Real Time

Authors:

Petr Skobelev, Igor Mayorov and Sergey Kozhevnikov

Abstract: In this paper, modern methods of scheduling and resource optimization based on the holonic approach and the principles of “Swarm Intelligence” are considered. The developed classes of holonic agents and a method of adaptive real-time scheduling, in which every agent is associated with an individual satisfaction function over a set of criteria and a bonus/penalty function, are discussed. In this method the plan is considered as an unstable equilibrium (consensus) of agents’ interests in a dynamically self-organized network of demand and supply agents. The self-organization of the plan demonstrates “swarm intelligence” through spontaneous autocatalytic reactions and other non-linear behaviours. It is shown that multi-agent technology provides a generic framework for developing and researching various concepts of “Swarm Intelligence” for real-time adaptive event-driven scheduling and optimization. The main result of the research is an approach to evaluating the adaptability of “Swarm Intelligence” by measuring the improvement in value and the transition time from one unstable state to another when disruptive events are processed. Measuring adaptability helps to manage self-organized systems and provides better quality and efficiency of real-time scheduling and optimization. This approach is being implemented in a multi-agent platform for adaptive resource scheduling and optimization. The results of the first experiments are presented and future research steps are discussed.

Paper Nr: 156
Title:

Creation of Emotion-inducing Scenarios using BDI

Authors:

Pierre Olivier Brosseau and Claude Frasson

Abstract: Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, existing methods to induce emotions are mostly limited to audio and visual stimulation. This study tested the induction of emotions in a virtual environment with scenarios that were designed using the Belief-Desire-Intention (BDI) model, well known in the agent community. The first objective of the study was to design the virtual environment and a set of scenarios occurring in driving situations. These situations can generate various emotional conditions or reactions. The design was followed by a testing phase using an EEG headset able to assess the resulting emotions (frustration, boredom and excitement) of 30 participants, to verify how accurately the predicted emotions could be induced. The study proved the reliability of the BDI model, with over 70% of our scenarios working as expected. Finally, we outline some of the possible uses of inducing emotions in a virtual environment for correcting negative emotions.

Paper Nr: 161
Title:

A Constructivist Approach to Rule Bases

Authors:

Giovanni Sileno

Abstract: The paper presents a set of algorithms for the conversion of rule bases between priority-based and constraint-based representations. Inspired by research in precedential reasoning in law, such algorithms can be used for the analysis of a rule base, and for the study of the impact of the introduction of new rules. In addition, the paper explores an optimization mechanism, built upon assumptions about the world in which the rule-based system operates, providing a model of environmental adaptation. The investigation is relevant to practical reasoning, agent modeling and agent programming.

Paper Nr: 164
Title:

Design of Communication and Control for Swarms of Aquatic Surface Drones

Authors:

Anders Lyhne Christensen, Sancho Oliveira, Octavian Postolache, Maria João de Oliveira, Susana Sargento, Pedro Santana, Luis Nunes, Fernando Velez, Pedro Sebastião, Vasco Costa, Miguel Duarte and Jorge Gomes

Abstract: The availability of relatively capable and inexpensive hardware components has made it feasible to consider large-scale systems of autonomous aquatic drones for maritime tasks. In this paper, we present the CORATAM and HANCAD projects, which focus on the fundamental challenges related to communication and control in swarms of aquatic drones. We argue for: (i) the adoption of a heterogeneous approach to communication in which a small subset of the drones have long-range communication capabilities while the majority carry only short-range communication hardware, and (ii) the use of decentralized control to facilitate inherent robustness and scalability. A heterogeneous communication system and decentralized control allow for the average drone to be kept relatively simple and therefore inexpensive. To assess the proposed methodology, we are currently building 25 prototype drones from off-the-shelf components. We present the current hardware designs and discuss the results of simulation-based experiments involving swarms of up to 1,000 aquatic drones that successfully patrolled a 20 km-long strip for 24 hours.

Paper Nr: 166
Title:

Data Mining for Automatic Linguistic Description of Data - Textual Weather Prediction as a Classification Problem

Authors:

J. Janeiro and I. Rodriguez-Fdez

Abstract: In this paper we present the results and performance of five different classifiers applied to the task of automatically generating textual weather forecasts from raw meteorological data. The type of forecasts this methodology can be applied to are template-based ones, which can be transformed into an intermediate language that can be directly mapped to classes (or values of variables). Experimental validation and tests of statistical significance were conducted using nine datasets from three real, publicly accessible meteorological websites, showing that RandomForest, IBk and PART are statistically the best classifiers for this task in terms of F-score, with RandomForest providing slightly better results.

Paper Nr: 170
Title:

Ontology Selection for Semantic Similarity Assessment

Authors:

Montserrat Batet and David Sanchez

Abstract: The assessment of the semantic similarity between concepts is a key tool to improve the understanding of text. The structured knowledge that ontologies provide has been extensively used to estimate similarities with encouraging results. However, in many domains, several ontologies modelling the same concepts in different ways are available. In such scenarios, the most suitable ontology for similarity calculation should be selected. In this paper we tackle this task by proposing an unsupervised method to select the ontology that seems to enable the most accurate similarity assessments. By studying the ontology features that most influence the similarity accuracy, we propose a score that captures them in a mathematically coherent way. Then, the most suitable ontology can be selected as that with the highest score. We also report the results of the proposed method for several well-known ontologies and a widely-used semantic similarity benchmark.

Paper Nr: 175
Title:

Self-Consciousness Cannot Be Programmed

Authors:

Jinchang Wang

Abstract: We investigate the issue of whether a computer can be self-aware or self-conscious. We derive logically that if a machine can be copied or duplicated, then it cannot be self-aware. Programs of a digital computer are copiable; therefore self-consciousness cannot be programmed. Self-awareness is an insurmountable stumbling block for a digital computer to achieve the full range of human consciousness. A robot cannot be self-conscious unless it is not copiable.

Paper Nr: 177
Title:

Modeling Post-level Sentiment Evolution in Online Forum Threads

Authors:

Dumitru-Clementin Cercel and Stefan Trausan-Matu

Abstract: Opinion propagation analysis in online forum threads is a relatively new research field, emerging in the context of the increasing popularity of forums. Many changes occur over time in online forum threads as new users intervene in the discussion and express their opinions. In this paper, we propose a novel task in the analysis of opinion propagation in online forum threads: the modeling of post-level sentiment evolution. This task consists of analyzing the post-level sentiment evolution in an online forum thread in order to obtain a simplified model of this evolution. Based on opinion mining, graph theory, and post-level sentiment analysis, our method comprises five steps: removal of posts containing only facts, post-level sentiment identification, removal of posts with neutral sentiment, aggregation of parent-child vertices, and aggregation of sibling vertices. We evaluate the proposed method on real-world forum threads, and the results of our experiments are presented in visualization interfaces.

Paper Nr: 180
Title:

Quantifying Depth and Complexity of Thinking and Knowledge

Authors:

Tamal T. Biswas and Kenneth W. Regan

Abstract: Qualitative approaches to cognitive rigor and depth and complexity are broadly represented by Webb’s Depth of Knowledge and Bloom’s Taxonomy. Quantitative approaches have been relatively scant, and some have been based on ancillary measures such as the thinking time expended to answer test items. In competitive chess and other games amenable to incremental search and expert evaluation of options, we show how depth and complexity can be quantified naturally. We synthesize our depth and complexity metrics for chess into measures of difficulty and discrimination, and analyze thousands of games played by humans and computers by these metrics. We show the extent to which human players of various skill levels evince shallow versus deep thinking, and how they cope with ‘difficult’ versus ‘easy’ move decisions. The goal is to transfer these measures and results to application areas such as multiple-choice testing that enjoy a close correspondence in form and item values to the problem of finding good moves in chess positions.

Paper Nr: 181
Title:

Designing Intelligent Agents to Judge Intrinsic Quality of Human Decisions

Authors:

Tamal T. Biswas

Abstract: Research on judging decisions made by fallible (human) agents is not as advanced as research on finding optimal decisions. Human decisions are often influenced by various factors, such as risk, uncertainty, time pressure, and depth of cognitive capability, whereas decisions by an intelligent agent (IA) can be effectively optimal without these limitations. The concept of 'depth', a well-defined term in game theory (including chess), does not have a clear formulation in decision theory. To quantify 'depth' in decision theory, we can configure an IA of supreme competence to 'think' at depths beyond the capability of any human, and in the process collect evaluations of decisions at various depths. One research goal is to create an intrinsic measure of the depth of thinking required to answer certain test questions, toward a reliable means of assessing their difficulty apart from item-response statistics. We relate the depth of cognition by humans to depths of search, and use this information to infer the quality of decisions made, so as to judge the decision-maker from his decisions. We use large data from real chess tournaments and evaluations from chess programs (AI agents) of strength beyond all human players. We then seek to transfer the results to other decision-making fields in which effectively optimal judgments can be obtained from hindsight, answer banks, powerful AI agents, or answers provided by judges of various competency.

Paper Nr: 183
Title:

Modeling Hierarchical Resources Within a Unified Ontology - A Position Paper

Authors:

Alexander Schiendorfer

Abstract: Resource-Intensive Software Ecosystems (RISE) can mainly be found in production management, but also in virtually any socio-technical environment. RISE appear prominently in the form of smart grids or cloud environments, where optimizing resource utilization and allocation becomes the most important aspect of competitive service provision. In such a context, the need for unified ontologies supported by adaptive software (i.e., software able to learn from and act on its environment) is highly attractive. Indeed, resources are mostly not monolithic entities but active and collaborative agents, often organized in a hierarchical manner. A hierarchy implies multiple levels of abstraction, leading to resource allocation on different levels of organization -- with abstractions being relevant for both inter- and intra-organization resource management. Once adequately defined, the use of constraint-based optimization algorithms on those multiple levels can provide efficient resource allocation. In this paper, we apply ontological elements to model resources in a unified manner on multiple levels, using an example taken from distributed energy management. We then present algorithmic ideas for organizing the hierarchy of these resources.

Paper Nr: 185
Title:

A Haar Wavelet-based Multi-resolution Representation Method of Time Series Data

Authors:

Muhammad Marwan Muhammad Fuad

Abstract: Similarity search in time series can be handled efficiently through a multi-resolution representation scheme, which offers the possibility of using pre-computed distances that are calculated and stored at indexing time and then utilized at query time, together with filters in the form of exclusion conditions that speed up the search. In this paper we introduce a new multi-resolution representation and search framework for time series. Compared with our previous multi-resolution methods, which use first-degree polynomials to reduce the dimensionality of the time series at different resolution levels, the novelty of this work is that it applies Haar wavelets to represent the time series. This representation is particularly well adapted to our multi-resolution approach, as discrete wavelet transforms reflect the local and global information content at every resolution level, thus enhancing the performance of the similarity search algorithm, as we show in this paper through extensive experiments on different datasets.
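The Haar decomposition underlying such a representation can be sketched compactly. The code below is our own illustration of the averaging/differencing variant of the Haar transform commonly used for time series indexing (how the paper assembles these levels into its search framework is not taken from the abstract).

```python
# Minimal sketch of the Haar wavelet pyramid of a time series: at each
# level, pairwise averages give a coarser approximation of the series and
# pairwise half-differences (details) keep the local information lost at
# that level. The /2 averaging variant (not the orthonormal /sqrt(2) one)
# is used here for readability.

def haar_step(series):
    # series length must be even; returns (approximation, detail)
    avg = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series), 2)]
    diff = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series), 2)]
    return avg, diff

def haar_levels(series):
    # full pyramid of resolutions, coarsest level last
    levels = []
    while len(series) > 1:
        series, detail = haar_step(series)
        levels.append((series[:], detail))
    return levels

levels = haar_levels([4.0, 2.0, 5.0, 7.0])
# level 1: approximation [3.0, 6.0], detail [1.0, -1.0]
# level 2: approximation [4.5],      detail [-1.5]
```

Each level is a valid lower-resolution view of the series, which is what lets distances computed on coarse levels serve as filters before finer levels are examined.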

Posters
Paper Nr: 3
Title:

Services of Ambient Assistance for Elderly and/or Disabled Person in Health Intelligent Habitat

Authors:

Amina Makhlouf

Abstract: The life expectancy of people is increasing, and related to that is an increase in the elderly population. The idea is to ensure that the elderly can stay longer in their homes. Many projects work on ways of allowing elderly persons to stay at home; these projects have focused on assessing how a person copes through continuous monitoring of his/her activities via sensor measurements. The objective of this paper is to design a multimodal software system for managing two services of ambient assistance for elderly and/or disabled persons in a health intelligent habitat: a Symptom Detection Service and a Comfort Service. These services use several sensors installed in the home and on the person to collect information at any time about the location and state of the person, to ensure his or her comfort in the home, and to help the decision maker choose appropriate assistance for these persons. This multimodal software platform is modelled by Coloured Timed and Stochastic Petri Nets (CTSPN) simulated in CPN Tools.

Paper Nr: 13
Title:

Anywhere but Here - Enron’s Emails in the Midst of a Crisis

Authors:

Corey Taylor

Abstract: The emotional states of employees under stress are likely to manifest themselves in channels other than inter-personal interactions, namely, emails. The Enron Email Corpus was mined with both supervised and unsupervised methods to determine the degree to which this was true for Enron employees whilst the corporation was under investigation. Changes in language patterns were then compared against the timeline of the investigation. The method described validates both the use of a subset of a very large corpus and the use of tagging methods to understand the patterns in various phrase types as used by Enron employees.

Paper Nr: 16
Title:

Human Visual System Based Framework For Gender Recognition

Authors:

Cherinet G. Zewdie and Hubert Konik

Abstract: A face reveals a great deal of information to a perceiver, including gender. Humans use specific information (a cue) from a face to recognize gender. The focus of this paper is to find this cue used when the Human Visual System (HVS) decodes the gender of a face. The result can be used by the Computer Vision community to develop an HVS-inspired framework for gender recognition. We carried out a psycho-visual experiment to find which face region is most correlated with gender. The eye movements of 15 observers were recorded using an eye tracker while they performed a gender recognition task under controlled and free viewing conditions. Analysis of the eye movements shows that the eye region is the most correlated with gender recognition. We also propose an HVS-inspired automatic gender recognition framework based on the psycho-visual experiment. The proposed framework is tested on the FERET database and is shown to achieve high recognition accuracy.

Paper Nr: 25
Title:

Combining Paraconsistency and Probability in CTL

Authors:

Norihiro Kamide and Daiki Koizumi

Abstract: Computation tree logic (CTL) is known to be one of the most useful temporal logics for verifying concurrent systems by model checking technologies. However, CTL is not sufficient for handling inconsistency-tolerant and probabilistic accounts of concurrent systems. In this paper, a paraconsistent (or inconsistency-tolerant) probabilistic computation tree logic (PpCTL) is derived from an existing probabilistic computation tree logic (pCTL) by adding a paraconsistent negation connective. A theorem for embedding PpCTL into pCTL is proven, which indicates that we can reuse existing pCTL-based model checking algorithms. Some illustrative examples involving the use of PpCTL are also presented.

Paper Nr: 43
Title:

Improvement of n-ary Relation Extraction by Adding Lexical Semantics to Distant-Supervision Rule Learning

Authors:

Hong Li, Sebastian Krause, Feiyu Xu and Andrea Moro

Abstract: A new method is proposed and evaluated that improves distantly supervised learning of pattern rules for n-ary relation extraction. The new method employs knowledge from a large lexical semantic repository to guide the discovery of patterns in parsed relation mentions. It extends the induced rules to semantically relevant material outside the minimal subtree containing the shortest paths connecting the relation entities, and it also discards rules without any explicit semantic content. It significantly raises both recall and precision, with a roughly 20% f-measure boost in comparison to the baseline system, which does not consider lexical semantic information.

Paper Nr: 44
Title:

Reordering Variables using ‘Contribution Number’ Strategy to Neutralize Sudoku Sets

Authors:

Saajid Abuluaih and Azlinah Hj. Mohamed

Abstract: Humans tend to form decisions intuitively, often based on experience and without considering optimality; sometimes, search algorithms and their strategies take the same approach. For example, the minimum remaining values (MRV) strategy selects Sudoku squares based on their remaining values; squares with fewer values are selected first, and the search algorithm continues solving squares until a Sudoku rule is violated. Then, the algorithm reverses its steps and attempts different values. The MRV strategy reduces the backtracking rate; however, when there are two or more blank squares with the same minimum number of values, the strategy selects one of these blank squares at random. In addition, MRV continues to target squares with minimum values, ignoring that some of those squares could be considered ‘solved’ when they have no influence on other squares. Hence, we introduce a new strategy called Contribution Number (CtN) that evaluates squares based on their influence on other candidates, in order to reduce square explorations and the backtracking rate. The results show that the CtN strategy behaves in a more disciplined manner and outperforms MRV in most cases.
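For readers unfamiliar with MRV, its selection step can be sketched as below. The contribution-style tie-breaker shown is our own illustrative reading (prefer the square whose assignment constrains more unassigned neighbours), not the paper's exact CtN metric:

```python
def select_mrv(candidates):
    """Minimum remaining values: pick the unassigned square with
    the fewest candidate values; ties are broken arbitrarily."""
    return min(candidates, key=lambda sq: len(candidates[sq]))

def select_ctn(candidates, neighbours):
    """Contribution-style ordering (illustrative): among MRV ties,
    prefer the square with more unassigned neighbours, i.e. the one
    whose assignment would prune the most other candidate sets."""
    def key(sq):
        influence = sum(1 for n in neighbours[sq] if n in candidates)
        return (len(candidates[sq]), -influence)
    return min(candidates, key=key)
```

A square with no unassigned neighbours has zero influence, which matches the abstract's observation that such squares can effectively be treated as solved.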

Paper Nr: 49
Title:

Implementation of a Realtime Event-location Analyzer

Authors:

Junyeob Yim

Abstract: A Social Networking Service (SNS) is a web-based platform that helps to build or maintain relationships among people. Early SNS platforms, including Friendster and MySpace, were implemented for desktop and laptop users. As more people access the internet wirelessly using their mobile phones, SNS platforms gain important features such as “real-time access” and “location information”. These two features make it possible for people to share their activities, interests, and observations in real time from any place. Recently, most SNS platforms, including Twitter, Facebook, and Yelp, use the location information of their users. Therefore, if we consider an SNS user as a sensor that reports its observations at a specific location, it becomes possible to detect events by analyzing their social contents. A number of studies on this topic have already been published or are still ongoing. Twitter has been widely used in this research because it has the three features required to detect an event: time, location, and content. However, most approaches struggle to correctly detect the location related to an event. In this paper, we introduce a system that detects an event and its location in real time, based on an increase in tweets that frequently mention a specific location. The results of our performance evaluation show that the proposed system detects events in real time. We also improved the system's performance by reducing noise.

Paper Nr: 74
Title:

Building Emotional Agents for Strategic Decision Making

Authors:

Bexy Alfonso and Emilio Vivancos

Abstract: Many works in experimental economics demonstrate the influence of emotions and affective factors on the process of human strategic decision making. Personality, emotions and mood produce biases with respect to what would be considered the strategic solution (Nash equilibrium) of many games. Thus, considering these factors in simulations of human behavior may produce results more closely aligned with real situations. We think that computational agents are a suitable technology for simulating such phenomena. We propose to use O3A, an Open Affective Agent Architecture for modeling rational and affective agents, in order to perform simulations where agents must take decisions as close as possible to those of humans. The approach is evaluated through the classical `prisoner's dilemma' and `trust' games.

Paper Nr: 85
Title:

Inconsistency-based Ranking of Knowledge Bases

Authors:

Said Jabbour

Abstract: Inconsistencies are a usually undesirable feature of many kinds of data and knowledge. Measuring inconsistency is potentially useful for determining which parts of the data or of the knowledge base are conflicting. Several measures have been proposed to quantify such inconsistencies. However, one of the main problems lies in the difficulty of comparing their underlying quality. Indeed, a knowledge base that is highly inconsistent with respect to a given inconsistency measure can be considered less inconsistent using another one. In this paper, we propose a new framework that allows us to partition a set of knowledge bases into a sequence of subsets according to a set of inconsistency measures, where the first element of the partition corresponds to the most inconsistent bases. Then we discuss how a finer ranking between knowledge bases can be derived from an original combination of existing measures. Finally, we extend our framework to provide inconsistency measures obtained by combining existing ones.

Paper Nr: 97
Title:

Face and Facial Expression Recognition - Fusion based Non Negative Matrix Factorization

Authors:

Humayra Binte Ali and David M. W. Powers

Abstract: Face and facial expression recognition is a broad research area within machine learning. Non-negative matrix factorization (NMF) is a recent technique for data decomposition and image analysis. Here we propose a face identification system as well as a facial expression recognition system, both based on NMF. We obtain significant results for face recognition: testing on the CK+ and JAFFE datasets, we find face identification accuracies of nearly 99% and 96.5%, respectively. However, the facial expression recognition (FER) rate is not as good as required for real-life deployment. To increase the detection rate for facial expression recognition, we propose a fusion-based NMF, named OEPA-NMF, where OEPA stands for Optimal Expression-specific Parts Accumulation. Our experimental results show that OEPA-NMF outperforms the prevalent NMF for facial expression recognition. Since face identification using NMF already achieves a good accuracy rate, we do not apply OEPA-NMF to face identification.
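The underlying factorization V ≈ WH with non-negative factors is standard; a minimal sketch using the classic Lee-Seung multiplicative updates (the plain NMF baseline, not the paper's OEPA-NMF variant):

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates for the
    Frobenius objective: approximate V >= 0 as W @ H with
    W, H >= 0. Returns the two non-negative factors."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # Multiplicative updates keep entries non-negative and
        # monotonically decrease the reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

For face images, the columns of V are vectorized faces and the columns of W tend to become localized parts, which is why NMF is popular for face and expression analysis.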

Paper Nr: 102
Title:

Using Coalitions with Stochastic Search to Solve Distributed Constraint Optimization Problems

Authors:

Nathaniel Gemelli and Jeffrey Hudack

Abstract: Distributed Constraint Optimization Problems (DCOPs) provide a convenient way of representing multi-agent decision problems. Many stochastic search methods have been developed for solving DCOPs; however, as problems grow in size and complexity, fully decentralized stochastic methods tend to suffer from a degradation in solution quality. We present Propensity-based Coalition-DSA (PC-DSA), a new coalition formation algorithm for solving DCOPs that uses stochastic search and mitigates the degradation of solution quality seen in large, complex problems. We introduce a Markov network formulation of the k-Coloring problem and show how a structure-based estimation of the Markov network can be used by agents during the coalition formation process to find partners with high propensity towards one another: agents that have a high probability of having the same joint assignment when the problem is solved. This allows a single agent in the coalition to solve the problem for all members of the coalition using simple stochastic search. We report empirical results that show solution quality gains over non-coalition-forming stochastic search in the distributed k-Coloring problem and discuss how propensity can be generalized to apply to general DCOPs.
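The stochastic-search baseline the paper builds on can be sketched as one synchronous round of a DSA-style algorithm for graph colouring: each agent counts conflicts with its neighbours and, with some activation probability, moves to a colour that reduces them. This is only an illustration of the baseline, not PC-DSA itself:

```python
import random

def dsa_step(assignment, neighbours, colours, p=0.7, rng=random):
    """One synchronous DSA-style round for graph colouring.
    Each node evaluates its conflicts against the *current*
    assignment and, with probability p, adopts a strictly
    better colour. The randomness avoids synchronized cycling."""
    def conflicts(node, colour):
        return sum(1 for n in neighbours[node] if assignment[n] == colour)
    new = dict(assignment)
    for node in assignment:
        current = conflicts(node, assignment[node])
        best = min(colours, key=lambda c: conflicts(node, c))
        if conflicts(node, best) < current and rng.random() < p:
            new[node] = best
    return new
```

Iterating `dsa_step` until no conflicts remain (or a step budget is exhausted) yields the simple decentralized solver whose quality degradation on large problems motivates coalition formation.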

Paper Nr: 140
Title:

System for Intrusion Detection with Artificial Neural Network

Authors:

Jose Ernesto Luna

Abstract: With the rapid expansion of computer networks during the past decade, security has become a crucial issue for computer systems. Different soft-computing based methods have been proposed in recent years for the development of intrusion detection systems. This paper presents a neural network approach to intrusion detection. A Multi-Layer Perceptron (MLP) is used for intrusion detection based on an off-line analysis approach. While most previous studies have focused on classifying records into one of two general classes, normal and attack, this research aims to solve a multi-class problem in which the type of attack is also detected by the neural network. Different neural network structures are analyzed to find the optimal neural network with regard to the number of hidden layers. An early-stopping validation method is also applied in the training phase to increase the generalization capability of the neural network. The results show that the designed system is capable of classifying records with about 91% accuracy with two hidden layers of neurons and 87% accuracy with one hidden layer.
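The early-stopping validation mentioned here is a generic training-loop pattern; a framework-agnostic sketch (the paper's MLP details are not given in this abstract, and `train_step`/`val_loss` are placeholder callables of our own):

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=100, patience=5):
    """Generic early stopping: halt training when the validation
    loss fails to improve for `patience` consecutive epochs, and
    report the best epoch (the model snapshot worth keeping)."""
    best, best_epoch, bad = float("inf"), -1, 0
    for epoch in range(max_epochs):
        train_step(epoch)          # one epoch of weight updates
        loss = val_loss()          # loss on a held-out validation set
        if loss < best - 1e-12:    # strict improvement resets patience
            best, best_epoch, bad = loss, epoch, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_epoch, best
```

Stopping at the validation minimum rather than the training minimum is what provides the generalization benefit the abstract refers to.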

Paper Nr: 148
Title:

Communicative Strategy in a Formal Model of Dispute

Authors:

Mare Koit

Abstract: We study human-human dialogues in a natural language where the communicative goal of the initiator of the dialogue is to bring the partner to a decision to perform a certain action. If the partner does not accept the goal, then a dispute starts. Arguments for and against doing the action are presented by the participants, and finally one of them wins and the other loses the dispute. We present a formal model of dispute which includes a model of argument. We discuss the involvement of the notion of communicative strategy in the model. A communicative strategy is considered an algorithm used by a participant for achieving his or her communicative goal. A communicative strategy also determines how a participant moves in ‘communicative space’ during the interaction. The communicative space is characterized by a number of coordinates (e.g. social distance between participants, intensity of communication, etc.). A limited version of the model of dispute has been implemented on the computer.

Paper Nr: 149
Title:

Checking Models for Activity Recognition

Authors:

Martin Nyolt

Abstract: Model checking is well established in system design and business process modelling. Model checking ensures and automatically proves the safety and soundness of models used in day-to-day systems. However, the need for model checking in activity recognition has not yet been recognised. Models for activity recognition can be built from prior knowledge; they can encode typical behaviour patterns and allow causal reasoning. As these models are manually designed, they can suffer from modelling errors. To address this problem, we discuss different classes of sensible properties and evaluate three different models for activity recognition. In all cases, modelling errors and inconsistencies were found.

Paper Nr: 160
Title:

Formalizing the Qualitative Superposition of Rectangles in Proof Assistant Isabelle/HOL

Authors:

Fadoua Ghourabi and Kazuko Takahashi

Abstract: We formalize and verify the superposition of rectangles in Isabelle/HOL. Superposition is associated with the arrangement of rectangular software windows while keeping some regions visible and others hidden. We adopt a qualitative spatial reasoning approach to represent these rectangles and the relations between their regions. The properties of the model are formally proved and reveal some characteristics of the superposition operation. Although this work is limited to 29 structures of rectangles, the superpositions produce hundreds of cases that are tedious to tackle in Isabelle/HOL. We also explain our strategy for optimizing the proofs.

Paper Nr: 167
Title:

Complex Character - Model for a Non Player AI Character for Interactive Narrative Discourse

Authors:

Hee Holmen

Abstract: The Non Player Characters (NPCs) in the Interactive Drama Façade are built on the Believable Agent model. This model is designed for effectively managing character behaviour, as believability is expressed through visible actions. Yet the NPCs in Façade do not render their 'rich characters': the dialogues do not respond well enough to express any complexities the characters may have. For dramatic narratives, authors in Interactive Narrative (IN) need ways to reveal complex characters. How can AI be used to build a complex character for interaction? More importantly, how should these complexities be revealed to the reader? This paper proposes design contexts for a complex Non Player Character (NPC) for the interactive comics framework Cyber Comix.

Paper Nr: 176
Title:

Using Domain Knowledge to Improve Intelligent Decision Support in Intensive Medicine - A Study of Bacteriological Infections

Authors:

Rui Veloso, Filipe Portela, Manuel Filipe Santos, Álvaro Silva and Fernando Rua

Abstract: Nowadays, antibiotic prescription is an object of study in many countries. The rate of prescription varies from country to country, without the reasons that justify those variations having been found. In intensive care units, the rising number of new infections each day is caused by multiple factors such as inpatient length of stay, low bodily defences, and surgical infections, among others. In order to complement the support of the decision process about which antibiotic would be most efficient, a heuristic based on domain knowledge extracted from biomedical experts was developed. This algorithm is implemented by intelligent agents. When an alert appears on the presence of a new infection, an agent collects the microbiological results for cultures, which makes it possible to identify the bacteria; then, using the rules, it searches for a set of antibiotics that can be administered to the patient, based on past results. At the end, the agent presents to physicians the top five sets and the success percentage of each antibiotic. This paper presents the proposed approach and a test with a particular bacterium using real data provided by an Intensive Care Unit.

Paper Nr: 178
Title:

Predicting the Risk Associated to Pregnancy using Data Mining

Authors:

Andreia Brandão, Eliana Pereira, Filipe Portela and Manuel Santos

Abstract: Women wishing to terminate a pregnancy should in general use a specialized health unit, as is the case of Maternidade Júlio Dinis in Porto, Portugal. One of the four stages comprising the process is evaluation. The purpose of this article is to evaluate the process of Voluntary Termination of Pregnancy and, consequently, to identify the risk associated with the patients. Data Mining (DM) models were induced to predict the risk in a real environment. Three different techniques were considered for the classification task: Decision Trees (DT), Support Vector Machines (SVM) and Generalized Linear Models (GLM). The Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology was applied to drive this work. Very promising results were obtained, achieving a sensitivity of approximately 93%.

Area 2 - Agents

Full Papers
Paper Nr: 38
Title:

Hybrid POMDP-BDI - An Agent Architecture with Online Stochastic Planning and Desires with Changing Intensity Levels

Authors:

Gavin Rens and Thomas Meyer

Abstract: Partially observable Markov decision processes (POMDPs) and the belief-desire-intention (BDI) framework have several complementary strengths. We propose an agent architecture which combines these two powerful approaches to capitalize on their strengths. Our architecture introduces the notion of the intensity of the desire for a goal’s achievement. We also define an update rule for goals’ desire levels, as well as when to select a new goal to focus on. To verify that the proposed architecture works, experiments were run with an agent based on the architecture in a domain where multiple goals must continually be achieved. The results show that (i) while the agent is pursuing goals, it can concurrently perform rewarding actions not directly related to its goals, (ii) the trade-off between goals and preferences can be set effectively, and (iii) goals and preferences can be satisfied even while dealing with stochastic actions and perceptions. We believe that the proposed architecture furthers the theory of high-level autonomous agent reasoning.

Paper Nr: 51
Title:

Parallel Shortest-path Searches in Multiagent-based Simulations with PlaSMA

Authors:

Max Gath

Abstract: The goods structure effect increases the complexity and dynamics of logistic processes. To handle the resulting challenges and requirements, the planning and control of logistic processes have to be reliable and adaptive. Especially in these dynamic environments, Multiagent-Based Simulation (MABS) is a suitable approach to support decision makers in evaluating companies' processes and identifying optimal decisions. This paper presents the PlaSMA multiagent simulation platform, which has been developed for the evaluation of logistics scenarios and strategic analyses. As shortest-path searches are an essential but cost-intensive part of the agents in the simulation of transport processes, we focus on the parallel application of a state-of-the-art Hub Labeling algorithm combined with Contraction Hierarchies. The results show that the optimal number of concurrently running routing agents is restricted by the available cores and/or the number of agents running physically concurrently. Moreover, by slightly restricting the agents' autonomy, a significant increase in runtime performance can be achieved without losing the advantages of agent-based simulations. This makes it possible to simulate large real-world transport scenarios with MABS and low hardware requirements.

Paper Nr: 64
Title:

Coalition Formation for Simulating and Analyzing Iterative Prisoner’s Dilemma

Authors:

Udara Weerakoon

Abstract: In this paper, we analyze the strictly competitive iterative version of the non-zero-sum two player game, the Prisoner’s Dilemma. This was accomplished by simulating the players in a memetic framework. Our primary motivation involves solving the tragedy of the commons problem, a dilemma in which individuals acting selfishly destroy the shared resources of the population. In solving this problem, we identify strategies for applying coalition formation to the spatial distribution of cooperative or defective agents. We use two reinforcement learning methods, temporal difference learning and Q-learning, on the agents in the environment. This overcomes the negative impact of random selection without cooperation between neighbors. Agents of the memetic framework form coalitions in which the leaders make the decisions as a way of improving performance. By imposing a reward and cost schema to the multiagent system, we are able to measure the performance of the individual leader as well as the performance of the organization.
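The Q-learning component mentioned above can be sketched with the standard Prisoner's Dilemma payoff matrix and a one-step update. The state encoding (the previous round's joint move) is our illustrative choice for the iterated game, not necessarily the paper's:

```python
# Row-player/column-player payoffs for one round: C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update. Here a state is the
    pair of moves played in the previous round, so an agent can
    condition its play on what both players just did."""
    best_next = max(Q.get((next_state, a), 0.0) for a in 'CD')
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Repeating this update over many rounds against neighbours (or within a coalition) is what lets agents learn when cooperation pays despite the single-round temptation to defect.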

Paper Nr: 68
Title:

User Perceptions of Communicative and Task-competent Agents in a Virtual Basketball Game

Authors:

Divesh Lala

Abstract: In this paper, we describe a virtual basketball game where a human and an embodied agent can play together as a team. Our goal is to investigate whether the human prefers an agent who is highly competent at basketball or one which is not as competent but tries to actively communicate through body movements. The virtual basketball game was implemented using a Kinect to sense body movements and a pressure sensor for handsfree navigation. In order to create an agent who could react to a user’s body movements, we designed an agent model based on joint activity theory. We performed an experiment where participants would play virtual basketball with each agent and evaluated them through questionnaires. It was found that participants preferred the agent which tried to communicate more with the user, even though they could distinguish that the other agent was better at playing basketball. We propose that communication capability for these types of agents is crucial, even at the expense of some task ability.

Paper Nr: 73
Title:

Why Robots Failed - Demonstrating the Superiority of Multiple-order Trading Agents in Experimental Human-agent Financial Markets

Authors:

Marco De Luca

Abstract: In the past decade there has been a rapid growth of the use of adaptive automated trading systems, commonly referred to in the finance industry as ``robot traders'': AI applications replacing highly-paid human traders in the global financial markets. The academic roots of this industry-changing deployment of AI technologies can be traced back to research published by a team of researchers at IBM at IJCAI 2001, which was subsequently replicated and extended by De Luca and Cliff at IJCAI 2011 and ICAART 2011. Here, we focus on the order management policy enforced by Open Exchange (OpEx), the open source algorithmic trading system designed by De Luca, for both human and robot traders: while humans are allowed to manage multiple orders simultaneously, robots only deal with one order at the time. We hypothesise that such unbalance may have strongly influenced the victory of human traders over robot traders, reported in past studies by De Luca et al., and by Cartlidge and Cliff. We employed OpEx to implement a multiple-order policy for robots as well as humans, and ran several human vs. robot trading experiments. Using aggregated market metrics and time analysis, we reached two important conclusions. First, we demonstrated that, in mixed human-robot markets, robots dealing multiple simultaneous orders consistently outperform robots dealing one order at a time. And second, we showed that while human traders outperform single-order robot traders under specific circumstances, multiple-order robot traders are never outperformed by human traders. We thus conclude that the performance of robot traders in a human-robot mixed market is strongly influenced by the order management policy they employ.

Paper Nr: 77
Title:

DipBlue: A Diplomacy Agent with Strategic and Trust Reasoning

Authors:

André Ferreira

Abstract: Diplomacy is a multi-player strategic and zero-sum board game, free of random factors, and allowing negotiation among players. The majority of existing artificial players (bots) for Diplomacy do not exploit the strategic opportunities enabled by negotiation, instead trying to decide their moves through solution search and the use of complex heuristics. We present DipBlue, an approach to the development of an artificial player that uses negotiation in order to gain advantage over its opponents, through the use of peace treaties, formation of alliances and suggestion of actions to allies. A simple trust assessment approach is used as a means to detect and react to potential betrayals by allied players. DipBlue was built to work with DipGame, a multi-agent systems testbed for Diplomacy, and has been tested with other players of the same platform and variations of itself. Experimental results show that the use of negotiation increases the performance of bots involved in alliances, when full trust is assumed. In the presence of betrayals, being able to perform trust reasoning is an effective approach to reduce their impact.

Paper Nr: 83
Title:

JChoc DisSolver - Bridging the Gap Between Simulation and Realistic Use

Authors:

Imade Benelallam, Zakarya Erraji and Ghizlaneg Elkhattabi

Abstract: The development of innovative and intelligent multiagent applications based on Distributed Constraints Reasoning techniques is obviously a tedious task, especially when tackling new combinatorial problems (e.g. distributed resource management, distributed air traffic management, Distributed Sensor Networks (Béjar et al., 2005)). However, there are very few open-source platforms dedicated to solving such problems in realistic settings. Given the difficulty that researchers face, simplifying assumptions and simulation are commonly used techniques. Nevertheless, these techniques may not be able to capture all the details of the problem to be solved. Hence, the transition from simulation to an actual development context causes a loss of accuracy and robustness in the applications to be implemented. In this paper, we present preliminary results of a new distributed constraint programming platform, namely JChoc DisSolver. Thanks to the extensibility of the JADE communication model and the robustness of Choco Solver, JChoc brings new added value to Distributed Constraints Reasoning. The platform is user-friendly, and the development of multiagent applications based on Constraint Programming is no longer a mystery to users. A real distributed problem is used to illustrate how the platform can be appropriated by an unsophisticated user, and the experimental results are encouraging for further investigation.

Paper Nr: 92
Title:

From Simulation to Development in MAS - A JADE-based Approach

Authors:

João Lopes and Henrique Lopes Cardoso

Abstract: Multi-agent systems (MAS) present an effective approach to the efficient development of modular systems composed of interacting agents. Several frameworks exist that aid the development of MAS, but they are often not very appropriate for some kinds of use, such as Multi-Agent-based Simulation (MABS). Other frameworks exist for running simulations, sharing little with the former. While open agent-based applications benefit from adopting development and interaction standards, such as those proposed by FIPA, most MABS frameworks do not support them. In this paper we propose an approach to bridge the gap between the development and simulation of MAS, by putting forward two complementary tools. The Simple API for JADE-based Simulations (SAJaS) enhances MABS frameworks with JADE-based features, and the MAS Simulation to Development (MASSim2Dev) tool allows the automatic conversion of a SAJaS-based simulation into a JADE MAS, and vice-versa. Repast Simphony was used as the base MABS framework. Our proposal provides increased simulation performance while enabling JADE programmers to quickly develop their simulation models using familiar concepts. Validation tests demonstrate the significant performance gain in using SAJaS with Repast Simphony when compared with JADE and show that using MASSim2Dev preserves the original functionality of the system.

Paper Nr: 113
Title:

From Formal Modelling to Agent Simulation Execution and Testing

Authors:

Ilias Sakellariou and Dimitris Dranidis

Abstract: This work presents an approach to agent-based simulation development using formal modelling, i.e. stream X-Machines, that combines the power of executable specifications and test case generation. In that respect, a domain-specific language is presented for effortlessly encoding agent behaviour as a stream X-Machine in a well-known simulation platform. The main benefit of using this formal approach in such a practical setting, apart from the fact that it offers a clear, intuitive way of specifying agent behaviour, is the existence of tools for test case generation, which make it possible to systematically generate “agent simulation test scenarios”, i.e. sequences of agent inputs that can be used for validation.

Paper Nr: 154
Title:

Integrating Adaptation Patterns into Agent Methodologies to Build Self-adaptive Systems

Authors:

Mariachiara Puviani

Abstract: Agent systems represent a very good example of complex and self-adaptive systems. Adaptation must be conceived not only at the level of single components, but also at the system level, where adaptation concerns the entire structure of the system; adaptation patterns have been proposed to address both levels. Many methodologies have been proposed to support developers in their work, but they fall short in addressing the choice and exploitation of adaptation patterns. In this work, we propose an integration of adaptation patterns into agent-oriented methodologies, exploiting an existing methodology to concretely show how such an integration can be enacted.

Short Papers
Paper Nr: 20
Title:

Dynamic Task Allocation for Human-robot Teams

Authors:

Tinka R. A. Giele and Tina Mioch

Abstract: Artificial agents, such as robots, are increasingly deployed for teamwork in dynamic, high-demand environments. This paper presents a framework which applies context information to establish task (re)allocations that improve a human-robot team's performance. Based on the framework, a model for adaptive automation was designed that takes into account the cognitive task load (CTL) of a human team member and the coordination costs of switching to a new task allocation. Based on these two context factors, it tries to optimize the level of autonomy of the robot for each task. The model was instantiated for a single human agent cooperating with a single robot in the urban search and rescue domain. A first experiment provided encouraging results: the cognitive task load of participants mostly reacted to the model as intended. Recommendations for improving the model are provided, such as adding more context information.

Paper Nr: 41
Title:

An Inflation / Deflation Model for Price Stabilization in Networks

Authors:

Jun Kiniwa

Abstract: We consider a simple network model for economic agents in which each agent can buy goods in its neighborhood. Prices may initially differ from node to node. However, by assuming some rules on new prices, we show that the distinct prices reach an equilibrium price through iterated buy and sell operations. First, we present a protocol model in which each agent always bids at some rate of the difference between his own price and the lowest price in the neighborhood. Next, we show that the equilibrium price can be derived from the total funds and the total goods for any network. This confirms that inflation / deflation occurs due to the increase / decrease of funds as long as the quantity of goods is constant. Finally, we consider how injected funds spread in a path network, because sufficient funds drive each agent to buy goods; this is a monetary policy against deflation. A set of recurrences leads to the price of goods at each node at any time. Then, we compare two injections of half the funds each with a single injection. It turns out that the former is better than the latter from a fund-spreading point of view, and thus it has applications to monetary policy and to strategic management based on the information of each agent.

Paper Nr: 69
Title:

Semantic Interoperability for Web Services based Smart Home Systems

Authors:

Hannu Järvinen and Petri Vuorimaa

Abstract: One of the key issues in smart home systems is the lack of interoperability between available solutions. While data representation and communication formats need to be unified, the goal is to provide intelligence for system control. In this paper, we present a solution for Web services based smart home systems to publish semantic data and to communicate with agent systems. Communication of browser-based agents with a smart home system is demonstrated. The efficiency of the system is measured and the results are provided. The presented solution provides a basis for exploiting intelligence from multi-agent systems in smart home system control.

Paper Nr: 70
Title:

Defending Autonomous Agents Against Attacks in Multi-Agent Systems Using Norms

Authors:

Jan Kantert, Sarah Edenhofer and Sven Tomforde

Abstract: The Trusted Desktop Grid (TDG) is a self-organised, agent-based organisation, where agents perform computational tasks for others to increase their performance. In order to establish a fair distribution and provide counter-measures against egoistic or malicious elements, technical trust is used. A fully self-organised approach can run into disturbed states, such as a trust breakdown of the system, that lead to unsatisfying system performance although the majority of participants are still behaving well. We previously introduced an additional system-wide control loop to detect and alleviate disturbed situations. Accordingly, we describe an Observer/Controller loop at system level that monitors the system status and intervenes if necessary. This paper focuses on the controller part, which instantiates norms as a reaction to observed suspicious situations. We demonstrate the benefit of our approach within a Repast-based simulation of the TDG. Therein, the impact of disturbances on the system performance is decreased significantly and the time to recover is shortened.

Paper Nr: 103
Title:

Linear Algebraic Semantics for Multi-agent Communication

Authors:

Ryo Hatano

Abstract: When we study multi-agent communication systems, we must manage the existence of communication channels between agents, such as phone numbers or e-mail addresses, whereas ordinary modal logic for multi-agent systems does not consider the notion of a channel. This paper proposes a decidable and semantically complete logic of belief with communication channels, and then extends the logic with informing action operators that change agents’ beliefs via communication channels. Moreover, to handle these semantics more efficiently, we propose a linear algebraic representation of them. That is, with the help of Fitting (2003) and van Benthem and Liu (2007), we reformulate the proposed semantics of the static doxastic logic and its dynamic extensions in terms of Boolean matrices. We also implement a calculation system for our matrix reformulations and make it publicly available on the web.

Paper Nr: 122
Title:

Agents Displacement in Arbitrary Geometrical Spaces - An Evolutionary Computation based Approach

Authors:

Francesco D'Aleo, Fabio D'Asaro and Valerio Perticone

Abstract: In many different social contexts, communication allows a collective intelligence to emerge. However, a correct way of exchanging information usually requires particular topological configurations of the agents involved in the process. Such a configuration should take into account several parameters, e.g. agent positioning, proximity, and the time efficiency of communication. Our aim is to present an algorithm, based on evolutionary programming, which optimizes agent placement on arbitrarily shaped areas. In order to show its ability to deal with arbitrary bi-dimensional topologies, this algorithm has been tested on a set of differently shaped areas that present concavities, convexities and obstacles. This approach can be extended to deal with concrete cases, such as object localization in a delimited area.

Paper Nr: 139
Title:

Towards a Resource-based Model of Strategy to Help Designing Opponent AI in RTS Games

Authors:

Juliette Lemaitre

Abstract: The artificial intelligence used for opponent non-player characters in commercial real-time strategy games is often criticized by players: it is useful for discovering the game but soon becomes too easy and too predictable. Yet a lot of research has been done on the subject, and successful complex behaviors have been created, but the resulting systems are too complicated for the video games industry, as game designers would need a prohibitive amount of time to learn how they function. Moreover, these systems often lack the control a game designer needs to adapt them to the desired behavior. To address this issue, we propose an accessible strategy model that can adapt itself to the player and can be easily created and modified by the game designer.

Paper Nr: 158
Title:

Multi-Agent Approach for Controlling Robots Marching in a File - A Simulation

Authors:

Yasushi Kambayashi

Abstract: How to explore unknown environments is a fundamental concern for the multi-robot system research community. This paper presents an approach for controlling cooperative multiple-robot exploration in an unknown environment. The approach we propose aims to minimize the overall exploration cost for multiple robots that march in procession. In order to achieve this goal, the file of multiple robots must be able to effectively come out of dead-ends while exploring a maze-like environment. The proposed approach employs multiple mobile software agents that can migrate freely from one robot to another to bring a certain role and ability to a robot. In particular, a mobile software agent brings the role of leader to an arbitrary robot in a file, so that this robot becomes the leader of a subgroup of the robots that can march in a file into a part of the environment. In order to demonstrate the effectiveness of our approach, we have built a simulator and partially constructed a real multi-robot system.

Paper Nr: 162
Title:

A Multiagent Based Approach to Money Laundering Detection and Prevention

Authors:

Cláudio Alexandre and João Balsa

Abstract: The huge number of bank operations that occur every day makes it extremely hard for financial institutions to spot malicious money laundering related operations. Although some predefined heuristics are used, they aren’t restrictive enough, still leaving too much work for human analysts. This motivates the need for intelligent systems that can help financial institutions fight money laundering in a variety of ways, such as: intelligent filtering of bank operations, intelligent analysis of suspicious operations, and learning of new detection and analysis rules. In this paper, we present a multiagent based approach to the problem of money laundering by defining a multiagent system designed to help financial institutions in this task, addressing two main problems: volume and rule improvement. We define the agent architecture and characterize the different types of agents, considering the distinct roles they play in the process.

Paper Nr: 163
Title:

Privacy Risk Assessment of Textual Publications in Social Networks

Authors:

David Sanchez and Alexandre Viejo

Abstract: Recent studies have warned that, in Social Networks, users usually publish sensitive data that can be exploited by dishonest parties. Some mechanisms to preserve the privacy of the users of social networks have been proposed (i.e. controlling who can access certain published data); however, a still unsolved problem is the lack of proposals that enable users to be aware of the sensitivity of the contents they publish. This is especially true in the case of unstructured textual publications (i.e., wall posts, tweets, etc.). These elements are considered to be particularly dangerous from the privacy point of view due to their dynamism and high informativeness. To tackle this problem, in this paper we present an automatic method to assess the sensitivity of a user’s textual publications according to her privacy requirements towards the other users in the social network. In this manner, users can have a clear picture of the privacy risks inherent to their publications and can take the appropriate countermeasures to mitigate them. The feasibility of the method is studied in a highly sensitive social network: PatientsLikeMe.

Paper Nr: 165
Title:

Extreme Sensitive Robotic - A Context-Aware Ubiquitous Learning

Authors:

Nicolas Verstaevel, Christine Régis and Valérian Guivarch

Abstract: Our work focuses on Extreme Sensitive Robotic, that is, on multi-robot applications that are in strong interaction with humans and their integration in a highly connected world. Because human-robot interactions have to be as natural as possible, we propose an approach where robots learn from demonstrations, memorize the contexts of learning, and self-organize their parts to adapt to new contexts. To deal with Extreme Sensitive Robotic, we propose to use both an Adaptive Multi-Agent System (AMAS) approach and a Context-Learning pattern in order to build ALEX (Adaptive Learner by Experiments), a multi-agent system for contextual learning from demonstrations.

Paper Nr: 168
Title:

Butler-ising HomeManager - A Pervasive Multi-Agent System for Home Intelligence

Authors:

Enrico Denti and Roberta Calegari

Abstract: Home Manager is an agent-based application for the control of an intelligent home, where the house is seen as an intelligent environment made of independent devices that participate in an agent society. The society is governed by a coordination infrastructure aimed at satisfying the user's goals and preferences (lighting, temperature, etc.) while achieving the global house policies and objectives (e.g. energy saving) in a highly-configurable way. In the existing prototype, designed mostly to prove the feasibility and effectiveness of the above approach, the testbed house was kept intentionally simple, with a limited number of rooms, user types, control devices and policies, and the infrastructure implementation lacked some features. The recent, widespread adoption of smart mobile devices (smartphones, tablets) enabling mobile connectivity has dramatically changed the reference scenario: users now expect at least to be able to monitor, and possibly control, their home devices in mobility, and in fact all major vendors now offer some app for this purpose. Yet, this is just the basic step: exploiting the situated connectivity enabled by GPS and the other geo-localisation techniques embedded in today's smartphones, novel pervasive scenarios can be devised that could not even be imagined in past years. This aspect is developed in the Butlers architecture, which provides a general framework and reference model for intelligent home management where the smart home is managed by an intelligent butler and interacts with its inhabitants taking into account their habits, behavior, location, preferences and any other sort of information to anticipate their needs and support their goals. 
In this context, this paper presents the novel ``Butler-ised'' Home Manager, which evolves the previous system in the Butlers perspective: the new prototype not only supports the remote control of the house appliances via an Android app, but exploits the user's position, tracked via geo-localisation, to anticipate the user's needs in a simple, yet significant, scenario -- namely, autonomously switching the house oven on when discovering that the user has just bought a take-away pizza on his/her way back home.

Paper Nr: 172
Title:

Agent-based Modelling for Green Space Allocation in Urban Areas - Factors Influencing Agent Behaviour

Authors:

Marta Vallejo

Abstract: The task of green space allocation in urban areas consists of identifying a suitable site for allocating green areas. In this position paper we discuss a number of factors, such as crowdedness, design, distribution and size, that could discourage inhabitants from visiting a certain green urban area. We plan to cluster our urban residents into several population segments using an Agent-Based Model and study the system in different predefined scenarios. The overall objective of this work is to provide spatial guidance to planners, policy makers and other stakeholders, and to shed light on potential policy conflicts between standard policy criteria and user preferences. We will evaluate this potential within a targeted stakeholder workshop.

Paper Nr: 184
Title:

Autonomous Pareto Front Scanning using an Adaptive Multi-Agent System for Multidisciplinary Optimization

Authors:

Julien Martin and Jean-Pierre Georgé

Abstract: Multidisciplinary Design Optimization (MDO) problems can have a unique objective or be multi-objective. In this paper, we are interested in MDO problems having at least two conflicting objectives. This characteristic ensures the existence of a set of compromise solutions called the Pareto front. We treat these MDO problems as Multi-Objective Optimization (MOO) problems. Current MOO methods suffer from certain limitations, especially the need for their users to adjust various parameters. These adjustments can be challenging, requiring both disciplinary and optimization knowledge. We propose using Adaptive Multi-Agent Systems technology to automate obtaining the Pareto front. ParetOMAS (Pareto Optimization Multi-Agent System) is designed to scan Pareto fronts efficiently, autonomously or interactively. Evaluations on several academic and industrial test cases are provided to validate our approach.

Posters
Paper Nr: 5
Title:

Action Preparation and Replanning in Manipulation

Authors:

Hisashi Hayashi and Hideki Ogawa

Abstract: In order to pick (place) a target object from (on) a shelf, a service robot moves to the front side of the shelf, removes obstacles, and reaches out a hand. If the robot prepares for the next arm manipulation while moving to the shelf, it is possible to save plan execution time. The robot also needs to replan if, for example, a person removes obstacles for the robot. After replanning, the robot might need to suspend the current action execution or next action preparation before executing the updated plan. This paper introduces a method to integrate planning, action execution, speculative next action preparation, replanning, and action suspension based on Hierarchical Task Network (HTN) planning. We also show that this method is effective for pick-and-place manipulation in dynamic environments.

Paper Nr: 57
Title:

Study of Human Activity Related to Residential Energy Consumption Using Multi-level Simulations

Authors:

Thomas Huraux

Abstract: In this paper, we illustrate how multi-agent multi-level modeling can help energy experts better understand and anticipate residential energy consumption. The problem we study is the anticipation of electricity consumption peaks. We explain in this context the benefit of the coexistence of microscopic (human activity) and macroscopic (social characteristics, overall consumption) levels of representation. We briefly present the SIMLAB model (Huraux et al., 2014), which extends the SMACH simulator (Amouroux et al., 2013) with coexisting levels on different modeling axes. We then present a model of a household's activity and its electrical consumption consistent with energy experts’ observations in the residential sector. We show the impact of different social factors, such as individual sensitivity to price or to personal comfort, on the appearance of consumption peaks. We illustrate the contribution of multi-level modeling to the understanding of macroscopic phenomena.

Paper Nr: 71
Title:

Extending the MASITS Methodology for General Purpose Agent Oriented Software Engineering

Authors:

Egons Lavendelis

Abstract: The aim of the paper is to extend the agent oriented software engineering methodology MASITS, initially developed for agent based Intelligent Tutoring System (ITS) development, to make it usable for the development of other agent oriented systems. The paper analyses the steps of the methodology and finds the specific ones that are either adapted to ITS characteristics or use particular artefacts from ITS research. Three extensions of the methodology have been developed, namely, a general holonic architecture, an agent definition method and an interaction design method. As a result, the extended version of the methodology can be used for agent oriented system development in case the system has characteristics similar to agent based ITSs. A case study of insurance policy market automation software is used to validate the use of the extended version in the development of systems other than ITSs.

Paper Nr: 72
Title:

Vessel Rotation Planning - A Layered Distributed Constraint Optimization Approach

Authors:

Shijie Li

Abstract: Vessel rotation planning concerns the problem of assigning rotations to vessels over a number of terminals for loading and unloading containers in a large port. Vessel operators and terminal operators communicate with each other to make appointments about the rotation plans for the vessels. However, it happens frequently that these appointments cannot be met. Thus, it is important to generate the rotation plans for the vessel operators in an efficient automated way. In this paper, we propose an approach to solve the vessel rotation planning problem by modeling it as a layered distributed constraint optimization problem (DCOP). To evaluate the performance of the proposed approach, three DCOP algorithms are considered, namely, Asynchronous Forward Bounding, Synchronous Branch and Bound, and the Dynamic Programming Optimization Protocol. We evaluate the solution quality and the computational and communication costs of these three algorithms when solving the vessel rotation planning problem using the proposed layered formulation.

Paper Nr: 108
Title:

Plan-belief Revision in Jason

Authors:

Andreas Schmidt Jensen and Jørgen Villadsen

Abstract: When information is shared between agents of unknown reliability, it is possible that their belief bases become inconsistent. In such cases, the belief base must be revised to restore consistency, so that the agent is able to reason. In some cases the inconsistent information may be due to the use of incorrect plans. We extend work by Alechina et al. to revise belief bases in which plans can be dynamically added and removed. We present an implementation of the algorithm in the AgentSpeak implementation Jason.

Paper Nr: 118
Title:

A Framework to Mitigate Debugging Difficulty on Agent Migration

Authors:

Shin Osaki, Masayuki Higashino and Kenichi Takahashi

Abstract: A mobile agent is an autonomous software module that can work on different computers and migrate among them. The characteristics of a mobile agent, migration and interaction, are helpful for implementing distributed systems. In the real world, however, mobile agents are not widely used because their migration makes debugging distributed systems difficult. Therefore, in this paper, we discuss problems in debugging a mobile agent system and propose a framework that includes search, single-step execution, and reproduction functions to help programmers debug such systems. Results from our experiments on debugging test applications show that our framework helps programmers debug, reducing the number of keystrokes by 41% and the number of clicks by 24%.

Paper Nr: 134
Title:

Towards an Explicit Bidirectional Requirement-to-Code Traceability Meta-model for the PASSI Methodology

Authors:

Mihoub Mazouz

Abstract: Traceability plays an important role in the development of computing systems, especially complex ones. It provides several benefits to stakeholders and developers during the different phases of the systems development life cycle, including verification & validation and maintenance. Unfortunately, there are very few works in the literature addressing the concept of traceability in multi-agent systems development methodologies. Having an incremental and iterative process, the well-known PASSI (Process for Agent Societies Specification and Implementation) methodology needs explicit traceability in order to facilitate the understanding of the MAS under development and to better manage the changes occurring during the development process. In addition, it can lead to requirement-based verification & validation. In this paper, we propose a new traceability meta-model for the PASSI methodology by introducing explicit traceability links of functional requirements through the various phases of the development life cycle.

Paper Nr: 141
Title:

An Adaptive Multi-Agent System for Ontology Co-evolution

Authors:

Souad Benomrane and Zied Sellami

Abstract: Dynamic ontology evolution reflects the adaptation of an ontology to a set of changes and their propagation to the other dependent components, to ensure its consistency. This process requires frequent involvement of the user (the ontologist), which is a complex and time-consuming task. As a solution, in this paper we present an extension of an ontology evolution tool called DYNAMO MAS, based on an adaptive multi-agent system (AMAS). We improve the agents by adding new behaviours that adapt to the ontologist's actions, in order to improve the proposals already made and to propose others.

Doctoral Consortium

DCAART 2015

Full Papers
Paper Nr: 3
Title:

Computing with Perceptions for the Linguistic Description of Complex Phenomena through the Analysis of Time Series Data

Authors:

A. Ramos-Soto

Abstract: We are living in a world which is increasingly flooded with vast amounts of data. As a consequence, the use of techniques that allow us to exploit and explain the information contained in this raw data has become mandatory. In this context, more human-friendly alternatives to standard techniques like statistics or data mining approaches are being considered. Among them, the soft computing field provides a set of tools allowing the creation of linguistic descriptions of data. These are automatically generated textual explanations that comprise the most relevant information implicit in the data, providing linguistic concepts which deal with the imprecision and ambiguity of language through the use of fuzzy sets. Following this research line, the Ph.D. we propose explores the potential of this field by providing real solutions employing linguistic descriptions and also extending the current theoretical base to achieve higher expressiveness.

Paper Nr: 5
Title:

Disentangling Cognitive and Constructivist Aspects of Hierarchies

Authors:

Stefano Bennati

Abstract: One of the most puzzling problems in the social sciences is the emergence of social institutions. The field of sociology is trying to understand why our society is the way we know it and whether an alternative, possibly better, society would be possible. One of the fundamental questions is the emergence of hierarchies. The cognitive approach suggests that hierarchies are encoded in human nature and are therefore the most natural form of organization; on the other hand, the constructivist approach sees hierarchies as a product of interactions between individuals that emerges independently of individual preferences. We will investigate under which conditions hierarchies emerge from a cognitive factor, a constructivist factor, or a combination of both. We will study this question both at the analytic level, with the help of Agent-Based simulations where agents are Neural Networks, and at the empirical level, by running sociological experiments in our laboratory.

Paper Nr: 6
Title:

A New Approach for the Detection of Emergent Behaviors and Implied Scenarios in Distributed Software Systems - Extracting Communications from Scenarios

Authors:

Fatemeh Hendijani Fard and Behrouz H. Far

Abstract: An approach to specifying the requirements and design of a Distributed Software System (DSS), widely used in recent years, is to describe scenarios with visual artifacts, such as UML Sequence Diagrams, ITU-T Message Sequence Charts (MSC) and High-level Message Sequence Charts (hMSC). Scenarios describe system behavior and define the components and their interactions. Each scenario determines a partial behavior of the system. Hence, the restricted view of the components in each scenario, and the distributed functionality and/or control in a DSS, may result in inconsistency in the system behavior. One problem that arises in scenario-based Distributed Software Systems is emergent behaviors, or implied scenarios, that occur because of the restricted view of one or more components. Emergent behaviors are unexpected behaviors that components exhibit at execution time even though they were not defined in their designs. Such unexpected behavior may imply a new scenario for the system and can result in considerable cost and damage. Therefore, emergent behaviors should be detected in the early phases of software development to prevent damage or cost after deployment. Detected emergent behaviors can be either accepted or rejected by the stakeholders; in any case, they should be detected and discussed, to be added as new designs or to be specified as negative scenarios that should be avoided. In our research, we try to devise an automatic methodology to detect emergent behaviors (implied scenarios) from the designs of the system. We also aim to help designers pinpoint the exact location of the problem in the system and identify possible solutions to remove the detected emergent behaviors.

Paper Nr: 8
Title:

Alternative Approaches to Planning

Authors:

Otakar Trunda

Abstract: In my PhD dissertation, I focus on action planning and constrained discrete optimization. I try to introduce novel approaches to the field of single-agent planning by combining standard techniques with meta-heuristic optimization, machine-learning algorithms, hyper-heuristics and algorithm selection approaches. Our main goal is to create new and flexible planning algorithms suited to a large variety of real-life problems. Planning is a fundamental and difficult problem in AI, and any new results in this area are directly applicable to many other fields. They can be used for single-agent or multi-agent action selection in both competitive and cooperative environments, and as we focus on optimization, our techniques are suitable for real-life problems that arise in robotics or transportation.

Paper Nr: 9
Title:

Automatic Generation of Learning Path

Authors:

Claudia Perez-Martinez and Gabriel Lopez Morteo

Abstract: This paper presents a proposal to automatically generate a learning path. The proposed method applies Natural Language Processing techniques; it uses as a knowledge source an ontological view of Wikipedia, taking advantage of its broad domain of concepts. The results have been validated by comparing them with teachers' opinions. It is expected that the generated learning path can be a useful input to instructional design processes, to be considered before the student profile is known.

Paper Nr: 10
Title:

Measuring Intrinsic Quality of Human Decisions

Authors:

Tamal T. Biswas

Abstract: Research on judging decisions made by fallible (human) agents is not as advanced as research on finding optimal decisions, or on human supervision of AI agents' decisions. Human decisions are often influenced by various factors, such as risk, uncertainty, time pressure, and depth of cognitive capability, whereas decisions by an AI agent can be effectively optimal without these limitations. The concept of 'depth', a well-defined term in game theory (including chess), does not have a clear formulation in decision theory. To quantify 'depth' in decision theory, we can configure an AI agent of supreme competence to 'think' at depths beyond the capability of any human, and in the process collect evaluations of decisions at various depths. One research goal is to create an intrinsic measure of the depth of thinking required to answer certain test questions, toward a reliable means of assessing their difficulty apart from item-response statistics. We relate the depth of cognition by humans to depths of search, and use this information to infer the quality of decisions made, so as to judge the decision-maker from his decisions. Our research extends the model of Regan and Haworth to quantify depth, plus related measures of complexity and difficulty, in the context of chess. We use large data sets from real chess tournaments and evaluations from chess programs (AI agents) of strength beyond all human players. We then seek to transfer the results to other decision-making fields in which effectively optimal judgements can be obtained from hindsight, answer banks, or powerful AI agents. In some applications, such as multiple-choice tests, we establish an isomorphism of the underlying mathematical quantities, which induces a correspondence between various measurement theories and the chess model. 
We provide results toward the objective of applying the correspondence in reverse to obtain and quantify measures of depth and difficulty for multiple-choice tests, stock market trading, and other real-world applications.

Special Session

PUaNLP 2015

Paper Nr: 2
Title:

What Did You Mean? - Facing the Challenges of User-generated Software Requirements

Authors:

Michaela Geierhos

Abstract: Existing approaches towards service composition demand requirements from the customers in terms of service templates, service query profiles, or partial process models. However, the non-expert customers addressed may be unable to fill in the slots of service templates as requested or to describe, for example, pre- and postconditions, or may even have difficulties formalizing their requirements. Thus, our idea is to provide non-experts with suggestions on how to complete or clarify their requirement descriptions written in natural language. Two main issues have to be tackled: (1) the partial or full inability (incapacity) of non-experts to specify their requirements correctly in formal and precise ways, and (2) problems in text analysis due to fuzziness in natural language. We present ideas on how to face these challenges by means of requirement disambiguation and completion. To this end, we conduct ontology-based requirement extraction and similarity retrieval based on requirement descriptions gathered from app marketplaces. The innovative aspect of our work is that we support users without expert knowledge in writing their requirements while simultaneously resolving ambiguity, vagueness, and underspecification in natural language.

Paper Nr: 3
Title:

Building TALAA, a Free General and Categorized Arabic Corpus

Authors:

Essma Selab and Ahmed Guessoum

Abstract: Arabic natural language processing (ANLP) has gained increasing interest over the last decade. However, the development of ANLP tools depends on the availability of large corpora. Unfortunately, the scientific community has a deficit in large and varied Arabic corpora, especially ones that are freely accessible. With the Internet continuing its exponential growth, Arabic Internet content has also been following the trend, yielding large amounts of textual data available through different Arabic websites. This paper describes the TALAA corpus, a voluminous general Arabic corpus built from daily Arabic newspaper websites. The corpus is a collection of more than 14 million words, with 15,891,729 tokens contained in 57,827 different articles. A part of the TALAA corpus has been tagged to construct an annotated Arabic corpus of about 7000 tokens, using a POS-tagger with a set of 58 detailed tags. The annotated corpus was manually checked by two human experts. The methodology used to construct TALAA is presented, and various metrics are applied to it, showing the usefulness of the corpus. The corpus can be made available to the scientific community upon authorisation.

Paper Nr: 4
Title:

Completing Mixed Language Grammars Through Womb Grammars Plus Ontologies

Authors:

Ife Adebara

Abstract: Womb Grammars are a recently introduced constraint-based methodology for acquiring linguistic information on a given language from that of another, implemented in CHRG (Constraint Handling Rule Grammars). This is a position paper that discusses their possible adaptation to multilingual text parsing. In particular, we propose to detect unspecified information with appropriate ontologies. Our proposed methodology exploits the descriptive power of constraints both for defining sentence acceptability and for inferring lexical knowledge from a word's sentential context, even when the word is foreign.

Paper Nr: 5
Title:

Underspecified Relations with a Formal Language of Situation Theory

Authors:

Roussanka Loukanova

Abstract: The paper is an introduction to a formal language of Situation Theory. The language provides algorithmic processing of situated information. We introduce specialized, restricted variables that are recursively constrained to satisfy type-theoretic conditions through restrictions and algorithmic assignments. The restricted variables designate recursively connected networks of memory locations for 'saving' parametric information that depends on situations and on restrictions over objects. The formal definitions introduce a richly informative typed language for the classification and representation of underspecified, parametric, and partial information that is dependent on situations.