Artificial Intelligence Preprint | 2019-07-16

Automated Playtesting of Matching Tile Games (1907.06570v1)

Luvneesh Mugrai, Fernando de Mesentier Silva, Christoffer Holmgård, Julian Togelius

2019-07-15

Matching tile games are an extremely popular game genre. Arguably the most popular variant, Match-3 games, are simple-to-understand puzzle games, making them great benchmarks for research. In this paper, we propose developing different procedural personas for Match-3 games in order to approximate different human playstyles and thereby create an automated playtesting system. The procedural personas are realized by evolving the utility function of a Monte Carlo Tree Search (MCTS) agent. We compare the performance and results of the evolved agents with a standard vanilla MCTS implementation as well as with a random move-selection agent. We then observe the impact on both the game's design and the game design process. Lastly, a user study is performed to compare the agents to human play traces.
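To make the persona idea concrete, here is a minimal sketch (not the authors' code) of how an MCTS agent's state evaluation can be a weighted sum of hand-picked board features, with the weight vector evolved toward a target playstyle. The feature names and the fitness definition are illustrative assumptions.

```python
import random

# Hypothetical board features an MCTS agent might score at a leaf/rollout state.
FEATURES = ["score_gained", "special_tiles_created", "moves_remaining"]

def utility(state_features, weights):
    """Weighted-sum utility plugged into MCTS node evaluation."""
    return sum(w * state_features[f] for f, w in zip(FEATURES, weights))

def evolve_weights(fitness, generations=50, pop_size=20, sigma=0.1):
    """Simple (mu + lambda)-style evolution of the utility weights."""
    population = [[random.uniform(-1, 1) for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy persona: favour creating special tiles over raw score (the fitness here is
# made up; in the paper it would reflect how well play matches the target persona).
target = {"score_gained": 0.2, "special_tiles_created": 0.7, "moves_remaining": 0.1}
fitness = lambda w: -sum((wi - target[f]) ** 2 for f, wi in zip(FEATURES, w))
print(evolve_weights(fitness))
```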

The Many AI Challenges of Hearthstone (1907.06562v1)

Amy K. Hoover, Julian Togelius, Scott Lee, Fernando de Mesentier Silva

2019-07-15

Games have benchmarked AI methods since the inception of the field, with classic board games such as Chess and Go recently leaving room for video games with related yet different sets of challenges. The set of AI problems associated with video games has in recent decades expanded from simply playing games to win, to playing games in particular styles, generating game content, modeling players, and more. Different games pose very different challenges for AI systems, and several different AI challenges can typically be posed by the same game. In this article we analyze the popular collectible card game Hearthstone (Blizzard 2014) and describe a varied set of interesting AI challenges posed by this game. Collectible card games are relatively understudied in the AI community, despite their popularity and the interesting challenges they pose. Analyzing a single game in depth in the manner we do here allows us to see the entire field of AI and Games through the lens of a single game, discovering a few new variations on existing research topics.

A semi-holographic hyperdimensional representation system for hardware-friendly cognitive computing (1907.05688v2)

A. Serb, I. Kobyzev, J. Wang, T. Prodromakis

2019-07-12

One of the main, long-term objectives of artificial intelligence is the creation of thinking machines. To that end, substantial effort has been placed into designing cognitive systems, i.e. systems that can manipulate semantic-level information. A substantial part of that effort is oriented towards designing the mathematical machinery underlying cognition in a way that is very efficiently implementable in hardware. In this work we propose a 'semi-holographic' representation system that can be implemented in hardware using only multiplexing and addition operations, thus avoiding the need for expensive multiplication. The resulting architecture can be readily constructed by recycling standard microprocessor elements and is capable of performing two key mathematical operations frequently used in cognition, superposition and binding, within a budget of below 6 pJ for 64-bit operands. Our proposed 'cognitive processing unit' (CoPU) is intended as just one (albeit crucial) part of much larger cognitive systems in which artificial neural networks of all kinds and associative memories work in concert to give rise to intelligence.
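As a rough illustration of the two operations mentioned above, the sketch below uses generic hyperdimensional-computing conventions (it is not the CoPU circuit): superposition is element-wise addition, and binding is realized by a cyclic permutation, which in hardware needs only multiplexers rather than multipliers. The dimensionality and the bipolar encoding are assumptions made for the example.

```python
import numpy as np

D = 64  # dimensionality; the paper quotes an energy budget for 64-bit operands

def random_hv(rng):
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def superpose(*hvs):
    """Superposition: element-wise addition followed by a sign threshold."""
    return np.sign(np.sum(hvs, axis=0))

def bind(hv, shift=1):
    """Binding via cyclic permutation, implementable with multiplexing only."""
    return np.roll(hv, shift)

rng = np.random.default_rng(0)
a, b = random_hv(rng), random_hv(rng)
memory = superpose(bind(a, 1), bind(b, 2))    # store two bound items together
print(np.dot(memory, bind(a, 1)) / D)         # high similarity: 'a' is recoverable
print(np.dot(memory, random_hv(rng)) / D)     # near zero for an unrelated vector
```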

Asking Clarifying Questions in Open-Domain Information-Seeking Conversations (1907.06554v1)

Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, W. Bruce Croft

2019-07-15

Users often fail to formulate their complex information needs in a single query. As a consequence, they may need to scan multiple result pages or reformulate their queries, which can be a frustrating experience. Alternatively, systems can improve user satisfaction by proactively asking questions to clarify users' information needs. Asking clarifying questions is especially important in conversational systems, since they can return only a limited number of results (often just one). In this paper, we formulate the task of asking clarifying questions in open-domain information-seeking conversational systems. To this end, we propose an offline evaluation methodology for the task and collect a dataset, called Qulac, through crowdsourcing. Our dataset is built on top of the TREC Web Track 2009-2012 data and consists of over 10K question-answer pairs for 198 TREC topics with 762 facets. Our experiments with an oracle model demonstrate that asking only one good question leads to over 170% retrieval performance improvement in terms of P@1, which clearly demonstrates the potential impact of the task. We further propose a retrieval framework consisting of three components: question retrieval, question selection, and document retrieval. In particular, our question selection model takes into account the original query and previous question-answer interactions when selecting the next question. Our model significantly outperforms competitive baselines. To foster research in this area, we have made Qulac publicly available.
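The three-component framework can be pictured as a small pipeline. The sketch below is a toy stand-in for the paper's learned models: candidate questions and documents are scored by naive term overlap, and the selection step conditions on the query plus previous question-answer turns, as described above. The function names and scoring rules are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    query: str
    history: list = field(default_factory=list)   # list of (question, answer) pairs

def retrieve_candidate_questions(query, question_bank, k=10):
    """Stage 1, question retrieval: pull candidates by naive term overlap."""
    overlap = lambda q: len(set(query.lower().split()) & set(q.lower().split()))
    return sorted(question_bank, key=overlap, reverse=True)[:k]

def select_question(conv, candidates):
    """Stage 2, question selection: condition on the query and past Q-A turns."""
    asked = {q for q, _ in conv.history}
    context = conv.query + " " + " ".join(a for _, a in conv.history)
    score = lambda q: len(set(context.lower().split()) & set(q.lower().split()))
    remaining = [q for q in candidates if q not in asked]
    return max(remaining, key=score) if remaining else None

def retrieve_documents(conv, corpus):
    """Stage 3, document retrieval: rank with the query expanded by the answers."""
    expanded = conv.query + " " + " ".join(a for _, a in conv.history)
    score = lambda d: len(set(expanded.lower().split()) & set(d.lower().split()))
    return sorted(corpus, key=score, reverse=True)

# Toy walk-through of one clarification turn.
conv = Conversation(query="dinosaur")
bank = ["Are you interested in dinosaur fossils?", "Do you want dinosaur toys?"]
docs = ["museum guide to dinosaur fossils", "online shop for dinosaur toys"]
q = select_question(conv, retrieve_candidate_questions(conv.query, bank))
conv.history.append((q, "I want to see fossils"))
print(retrieve_documents(conv, docs)[0])
```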

Proximal Policy Optimization with Mixed Distributed Training (1907.06479v1)

Zhenyu Zhang, Xiangfeng Luo, Shaorong Xie, Jianshu Wang, Wei Wang, Yang Li

2019-07-15

Instability and slowness are two main problems in deep reinforcement learning. Even though proximal policy optimization (PPO) is the state of the art, it still suffers from both. We introduce an improved algorithm based on PPO, mixed distributed proximal policy optimization (MDPPO), and show that it can accelerate and stabilize the training process. In our algorithm, multiple different policies are trained simultaneously, and each of them controls several identical agents that interact with environments. Actions are sampled by each policy separately as usual, but the trajectories used for training are collected from all agents rather than from a single policy. We find that if we choose some auxiliary trajectories carefully when training the policies, the algorithm becomes more stable and converges more quickly, especially in environments with sparse rewards.
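The sketch below captures the data flow described above under stated assumptions: several policies each drive a group of identical environments, all trajectories are pooled, and every policy is updated on its own rollouts plus a few auxiliary trajectories from the pool. The rule used here for choosing auxiliary trajectories (highest return) and the `PPOPolicy` placeholder are my assumptions; the paper only says the auxiliary trajectories are chosen carefully, and a real agent would perform the clipped-surrogate PPO update inside `ppo_update`.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    rewards: list = field(default_factory=list)
    @property
    def total_return(self):
        return sum(self.rewards)

class PPOPolicy:
    """Placeholder for an ordinary PPO agent (rollout + clipped-surrogate update)."""
    def rollout(self, env, horizon):
        return Trajectory(rewards=[env() for _ in range(horizon)])
    def ppo_update(self, trajectories):
        pass  # a real implementation runs the standard PPO update on these data

def train_mdppo(policies, env_groups, iterations, horizon=128, n_aux=4):
    """Mixed-distributed loop: pool trajectories from all policies, then update
    each policy on its own data plus a few high-return auxiliary trajectories."""
    for _ in range(iterations):
        all_trajs = {i: [p.rollout(env, horizon) for env in env_groups[i]]
                     for i, p in enumerate(policies)}
        for i, policy in enumerate(policies):
            own = all_trajs[i]
            others = [t for j, ts in all_trajs.items() if j != i for t in ts]
            aux = sorted(others, key=lambda t: t.total_return, reverse=True)[:n_aux]
            policy.ppo_update(own + aux)
    return policies

# Toy usage: 3 policies, each controlling 4 identical (here trivial) environments.
policies = [PPOPolicy() for _ in range(3)]
env_groups = [[lambda: random.random() for _ in range(4)] for _ in range(3)]
train_mdppo(policies, env_groups, iterations=5, horizon=16)
```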

A Neural Turing Machine for Conditional Transition Graph Modeling (1907.06432v1)

Mehdi Ben Lazreg, Morten Goodwin, Ole-Christoffer Granmo

2019-07-15

Graphs are an essential part of many machine learning problems such as analysis of parse trees, social networks, knowledge graphs, transportation systems, and molecular structures. Applying machine learning in these areas typically involves learning the graph structure and the relationships between the nodes of the graph. However, learning the graph structure is often complex, particularly when the graph is cyclic and the transitions from one node to another are conditioned, as in graphs used to represent a finite state machine. To solve this problem, we propose to extend the memory-based Neural Turing Machine (NTM) with two novel additions. We allow transitions between nodes to be influenced by information received from external environments, and we let the NTM learn the context of those transitions. We refer to this extension as the Conditional Neural Turing Machine (CNTM). We show that the CNTM can infer conditional transition graphs by empirically verifying the model on two data sets: a large set of randomly generated graphs, and a graph modeling the information retrieval process during certain crisis situations. The results show that the CNTM is able to reproduce the paths inside the graph with accuracy ranging from 82.12% for 10-node graphs to 65.25% for 100-node graphs.
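As a toy illustration of the learning target (not the CNTM network itself), the conditional transition graph below maps a (current node, external condition) pair to the next node, which is exactly the structure the model has to infer from example paths. The node and condition names are made up, loosely echoing the crisis-related data set mentioned above.

```python
# (current node, external condition) -> next node
transition_graph = {
    ("report_received", "verified"):      "dispatch_team",
    ("report_received", "unverified"):    "request_confirmation",
    ("request_confirmation", "verified"): "dispatch_team",
    ("dispatch_team", "resolved"):        "close_case",
}

def walk(start, conditions):
    """Replay a path through the conditional graph given a stream of external inputs."""
    node, path = start, [start]
    for cond in conditions:
        node = transition_graph.get((node, cond))
        if node is None:
            break
        path.append(node)
    return path

print(walk("report_received", ["unverified", "verified", "resolved"]))
```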

Feature Learning for Fault Detection in High-Dimensional Condition-Monitoring Signals (1810.05550v2)

Gabriel Michau, Yang Hu, Thomas Palmé, Olga Fink

2018-10-12

Complex industrial systems are continuously monitored by a large number of heterogeneous sensors. The diversity of their operating conditions and the possible fault types make it impossible to collect enough data for learning all the possible fault patterns. The paper proposes an integrated approach combining automatic unsupervised feature learning with one-class classification for fault detection, trained on data from healthy conditions only. The approach is based on stacked Extreme Learning Machines (namely Hierarchical ELM, or HELM) and comprises an autoencoder, performing unsupervised feature learning, stacked with a one-class classifier that monitors the distance of test data to the healthy training class, thereby assessing the health of the system. This study provides a comprehensive evaluation of HELM's fault detection capability compared to other machine learning approaches, such as stand-alone one-class classifiers (ELM and SVM), the same one-class classifiers combined with a traditional dimensionality reduction method (PCA), and a Deep Belief Network. The performance is first evaluated on a synthetic dataset that encompasses typical characteristics of condition-monitoring data. Subsequently, the approach is evaluated on a real case study of a power plant fault. The proposed algorithm, combining feature learning with a one-class classifier, demonstrates better performance, particularly in cases where the condition-monitoring data contain several non-informative signals.
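A compact numpy sketch of the idea follows, under simplifying assumptions: an ELM autoencoder (random hidden layer, closed-form output weights) learns features from healthy data only, and a plain distance-to-healthy rule with a quantile threshold stands in for the one-class classifier stage, which the paper implements differently.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_autoencoder(X, n_hidden, rng):
    """ELM-AE: random projection, then decoder weights solved in closed form."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = sigmoid(X @ W + b)
    beta = np.linalg.pinv(H) @ X              # least-squares decoder
    return lambda Z: sigmoid(Z @ beta.T)      # use beta^T as the feature projection

def fit_healthy_detector(X_healthy, n_hidden=32, quantile=0.99, seed=0):
    """Calibrate a simple one-class rule on healthy data only."""
    rng = np.random.default_rng(seed)
    encode = elm_autoencoder(X_healthy, n_hidden, rng)
    F = encode(X_healthy)
    center = F.mean(axis=0)
    threshold = np.quantile(np.linalg.norm(F - center, axis=1), quantile)
    return lambda X: np.linalg.norm(encode(X) - center, axis=1) > threshold

# Usage: train on healthy condition-monitoring signals, flag what drifts away.
rng = np.random.default_rng(1)
healthy = rng.normal(0, 1, size=(500, 20))
detect = fit_healthy_detector(healthy)
print(detect(rng.normal(0, 1, size=(5, 20))))   # healthy-like: mostly not flagged
print(detect(rng.normal(4, 1, size=(5, 20))))   # strongly shifted: likely flagged
```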

Learning by Abstraction: The Neural State Machine (1907.03950v3)

Drew A. Hudson, Christopher D. Manning

2019-07-09

We introduce the Neural State Machine, seeking to bridge the gap between the neural and symbolic views of AI and integrate their complementary strengths for the task of visual reasoning. Given an image, we first predict a probabilistic graph that represents its underlying semantics and serves as a structured world model. Then, we perform sequential reasoning over the graph, iteratively traversing its nodes to answer a given question or draw a new inference. In contrast to most neural architectures that are designed to closely interact with the raw sensory data, our model operates instead in an abstract latent space, by transforming both the visual and linguistic modalities into semantic concept-based representations, thereby achieving enhanced transparency and modularity. We evaluate our model on VQA-CP and GQA, two recent VQA datasets that involve compositionality, multi-step inference and diverse reasoning skills, achieving state-of-the-art results in both cases. We provide further experiments that illustrate the model's strong generalization capacity across multiple dimensions, including novel compositions of concepts, changes in the answer distribution, and unseen linguistic structures, demonstrating the qualities and efficacy of our approach.
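The inference pattern can be sketched as a soft traversal over the predicted graph: a probability distribution over nodes is repeatedly pushed along edges whose weights are re-scored by the current reasoning instruction. The numpy toy below only illustrates that pattern; the graph, edge features, and instructions are random placeholders, not outputs of the trained model.

```python
import numpy as np

def traversal_step(p_nodes, adjacency, edge_feats, instruction):
    """One reasoning step: re-weight edges by the instruction, then propagate
    the node attention distribution along the weighted edges."""
    edge_scores = edge_feats @ instruction               # (n, n) edge relevance
    trans = adjacency * np.exp(edge_scores)              # keep only real edges
    trans = trans / (trans.sum(axis=1, keepdims=True) + 1e-9)
    p_next = p_nodes @ trans
    return p_next / (p_next.sum() + 1e-9)

n, d = 4, 3
rng = np.random.default_rng(0)
adjacency = np.array([[0, 1, 0, 0],
                      [0, 0, 1, 1],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]], dtype=float)
edge_feats = rng.normal(size=(n, n, d))                  # stand-in edge embeddings
p = np.array([1.0, 0.0, 0.0, 0.0])                       # start attention on node 0
for instruction in rng.normal(size=(2, d)):              # two decoded instructions
    p = traversal_step(p, adjacency, edge_feats, instruction)
print(p)                                                 # attention after two hops
```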

Comprehensive Process Drift Detection with Visual Analytics (1907.06386v1)

Anton Yeshchenko, Claudio Di Ciccio, Jan Mendling, Artem Polyvyanyy

2019-07-15

Recent research has introduced ideas from concept drift into process mining to enable the analysis of changes in business processes over time. This stream of research, however, has not yet addressed the challenges of drift categorization, drill-down, and quantification. In this paper, we propose a novel technique for managing process drifts, called Visual Drift Detection (VDD), which fulfills these requirements. The technique starts by clustering declarative process constraints discovered from recorded logs of executed business processes based on their similarity, and then applies change point detection to the identified clusters to detect drifts. VDD complements these features with detailed visualizations and explanations of drifts. Our evaluation, on both synthetic and real-world logs, demonstrates all the aforementioned capabilities of the technique.
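A small sketch of the pipeline, under simplifying assumptions: each declarative constraint yields a confidence-over-time series, similar series are clustered greedily by correlation, and a single change point is reported per cluster where the mean of its averaged series shifts the most. Both the clustering rule and the single-split detector below are naive stand-ins for the techniques used in VDD.

```python
import numpy as np

def cluster_by_correlation(series, threshold=0.8):
    """Greedy clustering of constraint-confidence series by Pearson correlation."""
    clusters = []
    for i, s in enumerate(series):
        for c in clusters:
            if np.corrcoef(s, series[c[0]])[0, 1] >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def single_change_point(x):
    """Index that best splits x into two segments with different means."""
    best_t, best_gain = None, 0.0
    total = ((x - x.mean()) ** 2).sum()
    for t in range(2, len(x) - 2):
        left, right = x[:t], x[t:]
        resid = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if total - resid > best_gain:
            best_t, best_gain = t, total - resid
    return best_t

def detect_drifts(series):
    return [(cluster, single_change_point(np.mean([series[i] for i in cluster], axis=0)))
            for cluster in cluster_by_correlation(series)]

# Toy log: three constraints drift at time 60, two stay stable.
rng = np.random.default_rng(0)
t = np.arange(120)
drifting = [np.where(t < 60, 0.9, 0.3) + rng.normal(0, 0.02, 120) for _ in range(3)]
stable = [np.full(120, 0.7) + rng.normal(0, 0.02, 120) for _ in range(2)]
print(detect_drifts(drifting + stable))   # drifting cluster reports a point near 60
```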

Decentralized learning with budgeted network load using Gaussian copulas and classifier ensembles (1804.10028v3)

John Klein, Mahmoud Albardan, Benjamin Guedj, Olivier Colot

2018-04-26

We examine a network of learners that address the same classification task but must learn from different data sets. The learners cannot share data but instead share their models. Models are shared only once so as to limit the network load. We introduce DELCO (standing for Decentralized Ensemble Learning with COpulas), a new approach for aggregating the predictions of the classifiers trained by each learner. The proposed method aggregates the base classifiers using a probabilistic model relying on Gaussian copulas. Experiments on ensembles of logistic regressors demonstrate competitive accuracy and increased robustness in the case of dependent classifiers. A companion Python implementation can be downloaded at https://github.com/john-klein/DELCO
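The sketch below is a rough, simplified take on the copula-based aggregation (my own heuristic, not the exact DELCO estimator from the repository above): a Gaussian copula correlation matrix is fitted per class from validation-set outputs of the base classifiers, and at prediction time the product of the individual class probabilities is re-weighted by the copula density, so that dependent classifiers are not counted as if they were independent.

```python
import numpy as np
from scipy.stats import norm

def fit_copula_correlation(probs_val):
    """Estimate a copula correlation matrix from validation outputs.
    probs_val: (n_samples, n_classifiers) probabilities for one class."""
    z = norm.ppf(np.clip(probs_val, 1e-6, 1 - 1e-6))
    return np.corrcoef(z, rowvar=False)

def gaussian_copula_density(u, R):
    """Density of the Gaussian copula with correlation matrix R at point u."""
    z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    quad = z @ (np.linalg.inv(R) - np.eye(len(u))) @ z
    return np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(R))

def aggregate(probs, R_per_class):
    """probs: (n_classifiers, n_classes) predictions for one sample.
    Heuristic score: product of individual class probabilities, re-weighted by the
    class-specific copula density that captures classifier dependence."""
    scores = np.array([
        gaussian_copula_density(probs[:, k], R_per_class[k]) * probs[:, k].prod()
        for k in range(probs.shape[1])
    ])
    return scores / scores.sum()
```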


