Search Results

Showing items 1–10 of 275 for the query "neural architecture".

Advanced Nonlinear Studies 4 (2004), 549–562

Neural Architecture and Locomotion
R.E.L. Turner, Department of Mathematics, University of Wisconsin, Madison, Wisconsin 53706 (e-mail: turner@math.wisc.edu). Received 23 September 2004.

Abstract: Ascaris suum is a parasitic nematode that lives in pigs' intestines. It is a tempting subject for neurophysiologists in that it is 'simple', having only 300 neurons, about 80 of which are associated with locomotion. The muscular and neural structures are quite well understood, but the means by which they produce locomotion are not …

Neural Architectures of Compositionality
Frank van der Velde

Compositionality is a key feature of human cognition. Thus, it has to be a key feature of the human brain as well. However, it has been notoriously hard to show how compositionality can be implemented in neural structures (both natural and artificial). In this article, I will show how compositionality can be implemented in neural structures. In particular, I will discuss a neural 'blackboard' architecture of compositional sentence structure. The neural blackboard architecture solves the four …

© Freund & Pettman, U.K. Reviews in the Neurosciences, 14, 121–143 (2003)

Neural Architectures for Robot Intelligence
H. Ritter, J.J. Steil, C. Nölker, F. Röthling and P. McGuire
Neuroinformatics Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany

Synopsis: We argue that direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been …

Shruti Kaushik, Abhinav Choudhury, Nataraj Dasgupta, Sayee Natarajan, Larry A. Pickett, and Varun Dutt
10 Evaluating single- and multi-headed neural architectures for time-series forecasting of healthcare expenditures

Abstract: Artificial neural networks (ANNs) are increasingly being used in the healthcare domain for time-series predictions. However, for multivariate time-series predictions in the healthcare domain, the use of multi-headed neural network architectures has been less explored in the literature. Multi-headed architectures work on the idea that …
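The multi-headed idea mentioned in this abstract — one small sub-network ("head") per input series, with the heads' features merged for a joint forecast — can be sketched in plain NumPy. The layer sizes and random weights below are illustrative assumptions, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Two input series (e.g. two expenditure variables), a window of 4 time steps each.
window = 4
heads = [  # one small dense "head" per input series
    {"W": rng.normal(size=(window, 8)), "b": np.zeros(8)},
    {"W": rng.normal(size=(window, 8)), "b": np.zeros(8)},
]
W_out = rng.normal(size=(16, 1))  # joint layer over the concatenated head features
b_out = np.zeros(1)

def multi_head_forecast(series_windows):
    """series_windows: one window per series, each of shape (window,)."""
    features = [relu(x @ h["W"] + h["b"]) for x, h in zip(series_windows, heads)]
    merged = np.concatenate(features)  # merge the heads' features
    return merged @ W_out + b_out     # single joint forecast value

x1 = rng.normal(size=window)
x2 = rng.normal(size=window)
print(multi_head_forecast([x1, x2]).shape)  # (1,)
```

The design point is that each head can specialize on the dynamics of its own series before the joint layer combines them, in contrast to a single-headed model that sees all series through one shared input layer.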

Abstract

This paper presents the Computing Networks (CNs) framework. CNs are used to generalize neural and swarm architectures. Artificial neural networks, ant colony optimization, particle swarm optimization, and realistic biological models are used as examples of instantiations of CNs. The description of these architectures as CNs allows their comparison. Their differences and similarities allow the identification of properties that enable neural and swarm architectures to perform complex computations and exhibit complex cognitive abilities. In this context, the most relevant characteristic of CNs is the existence of multiple dynamical and functional scales. The relationship between multiple dynamical and functional scales and adaptation, cognition (of brains and swarms), and computation is discussed.

… involvement of deep neural architectures as an impressive tool for solving multiple image processing problems. This chapter describes the superior performance obtained by the application of deep learning techniques to the task of image processing.

Keywords: Deep learning, Image processing, Neural architectures, Classification, Image resolution, Image sharpening

3.1 Introduction

The world has become an automated one as we move closer towards white-collar automation faster than ever before. One critical insight that only people can provide is the visual abilities that …

…) in real time to associate the keys with PUF images $J_{mn}$ acquired in different experiments under the same input conditions of $I_{mn}$ and subjected to experimental fluctuations of input parameters. PUF, physical unclonable function. To address this problem on a general ground, we use a deep neural network (DNN) architecture (Figure 2a, b), which we train to learn the mapping function $\mathcal{M}$ satisfying constraints i)–iii). The DNN used in this work is a 2-layer feedforward neural architecture with a rectified linear unit activation function [41] …
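The 2-layer feedforward architecture with ReLU activations described in this snippet reduces to two affine maps with a nonlinearity in between. A minimal untrained sketch follows; the dimensions and random weights are placeholders, not the paper's trained DNN:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes only; the paper's input/output dimensions are not given here.
d_in, d_hidden, d_out = 32, 64, 16
W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.1, size=(d_hidden, d_out))
b2 = np.zeros(d_out)

def forward(x):
    """2-layer feedforward net: affine -> ReLU -> affine."""
    h = np.maximum(x @ W1 + b1, 0.0)  # rectified linear unit
    return h @ W2 + b2

x = rng.normal(size=d_in)
y = forward(x)
print(y.shape)  # (16,)
```

In the paper's setting the weights would be fit by training so that `forward` approximates the mapping $\mathcal{M}$ from acquired PUF images to keys.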

… central premises of Darwinian evolutionary biology, genetic variation, would be false. Concepts and processes borrowed from linguistics such as "modularity" have impeded our understanding of brain–behavior relations. Some aspects of behavior are regulated in specific localized "modules" in the brain, but current research demonstrates that the neural architecture regulating human language is also implicated in motor control, cognition, and other aspects of behavior. The neural bases of enhanced human language are not separable from cognition and motor ability. The …

Abstract

Due to the advances made in recent years, methods based on deep neural networks have been able to achieve state-of-the-art performance in various computer vision problems. In some tasks, such as image recognition, neural-based approaches have even been able to surpass human performance. However, the benchmarks on which neural networks achieve these impressive results usually consist of fairly high-quality data. On the other hand, in practical applications we are often faced with images of low quality, affected by factors such as low resolution, presence of noise, or a small dynamic range. It is unclear how resilient deep neural networks are to the presence of such factors. In this paper we experimentally evaluate the impact of low resolution on the classification accuracy of several notable neural architectures of recent years. Furthermore, we examine the possibility of improving neural networks' performance in the task of low-resolution image recognition by applying super-resolution prior to classification. The results of our experiments indicate that contemporary neural architectures remain significantly affected by low image resolution. By applying super-resolution prior to classification we were able to alleviate this issue to a large extent as long as the resolution of the images did not decrease too severely. However, in the case of very low resolution images the classification accuracy remained considerably affected.
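The evaluation pipeline this abstract describes — degrade the image, optionally restore resolution, then classify — can be sketched as follows. Nearest-neighbour upsampling stands in for a learned super-resolution model, and a random linear map stands in for the classifier; both are stand-ins, not the paper's methods:

```python
import numpy as np

rng = np.random.default_rng(2)

def downsample(img, factor):
    """Simulate low resolution by keeping every `factor`-th pixel."""
    return img[::factor, ::factor]

def upsample_nearest(img, factor):
    """Stand-in for super-resolution: nearest-neighbour upscaling."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Toy "classifier": a fixed random linear map over the flattened image.
n_classes, side = 10, 32
W = rng.normal(size=(side * side, n_classes))

def classify(img):
    return int(np.argmax(img.reshape(-1) @ W))

img = rng.normal(size=(side, side))
low = downsample(img, 2)             # (16, 16): resolution is lost
restored = upsample_nearest(low, 2)  # (32, 32): shape recovered, detail not
print(classify(img), classify(restored))
```

Comparing `classify(img)` with `classify(restored)` over a labelled test set is the shape of the experiment: upscaling restores the input dimensions the classifier expects, but accuracy still degrades when too much detail was lost in the downsampling step.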

Abstract

Addressing Answer Selection (AS) tasks with complex neural networks typically requires a large amount of annotated data to increase the accuracy of the models. In this work, we are interested in simple models that can potentially give good performance on datasets with no or few annotations. First, we propose new unsupervised baselines that leverage distributed word and sentence representations. Second, we compare the ability of our neural architectures to learn from few annotated examples in a weakly supervised scheme, and we demonstrate how these methods can benefit from pre-training on an external dataset. With an emphasis on results reproducibility, we show that our simple methods can reach or approach state-of-the-art performance on four common AS datasets.
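An unsupervised baseline of the kind this abstract mentions can score each candidate answer by the similarity of distributed representations — here, cosine similarity between averaged word vectors. The toy vocabulary and random embeddings below are illustrative assumptions (real baselines would use pretrained vectors), not the authors' models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical embedding table; in practice these would be pretrained word vectors.
vocab = {w: rng.normal(size=50) for w in
         "what color is the sky blue grass green cats sleep".split()}

def sentence_vec(sentence):
    """Distributed sentence representation: mean of the word vectors."""
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_answers(question, candidates):
    """Return the index of the candidate most similar to the question."""
    q = sentence_vec(question)
    scores = [cosine(q, sentence_vec(c)) for c in candidates]
    return int(np.argmax(scores))

best = rank_answers("what color is the sky",
                    ["grass is green", "the sky is blue", "cats sleep"])
print(best)
```

Because this baseline needs no labels at all, it gives the "no annotations" reference point against which the weakly supervised neural models are compared.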