Hit List

Title:

Kernel CMAC: an Efficient Neural Network for Classification and Regression

Description:

Kernel methods in learning machines have been developed in the last decade as new techniques for solving classification and regression problems. Kernel methods have many advantageous properties regarding their learning and generalization capabilities, but obtaining the solution usually requires computationally complex quadratic programming. To reduce computational complexity, many different versions have been developed. These versions apply different kernel functions, utilize the training data in different ways or apply different criterion functions. This paper deals with a special kernel network, which is based on the CMAC neural network. The Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Besides these attractive features, the modelling and generalization capabilities of a CMAC may be rather limited. The paper shows that kernel CMAC, an extended version of the classical CMAC network implemented in a kernel form, improves these properties of the classical version significantly. Both the modelling and the generalization capabilities are improved while the limited computational complexity is maintained. The paper shows the architecture of this network and presents the relation between the classical CMAC and the kernel networks. The operation of the proposed architecture is illustrated using some common benchmark problems.
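As a rough illustration of the kernel view described in the abstract (not the paper's exact algorithm): in one dimension, a binary CMAC with generalization parameter C activates C overlapping basis functions per quantized input, so two quantized inputs u and v share max(0, C - |u - v|) active cells. Ridge regression with this triangular kernel sketches how such a network can be trained in kernel form; all parameter values below are illustrative assumptions.

```python
# Sketch of a 1-D "kernel CMAC"-style fit via ridge regression with a
# triangular overlap kernel. Illustrative only; not the paper's algorithm.
import numpy as np

C = 8  # CMAC generalization (overlap) parameter, chosen arbitrarily here

def cmac_kernel(u, v):
    """K(u, v) = number of basis functions active for both quantized inputs."""
    return np.maximum(0, C - np.abs(u[:, None] - v[None, :]))

def fit(u_train, y, lam=1e-3):
    """Kernel ridge regression: solve (K + lam*I) coeff = y."""
    K = cmac_kernel(u_train, u_train).astype(float)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def predict(u_train, coeff, u_new):
    """f(u) = sum_i coeff_i * K(u, u_i)."""
    return cmac_kernel(u_new, u_train).astype(float) @ coeff

u = np.arange(0, 64, 2)        # quantized training positions (illustrative)
y = np.sin(u / 10.0)           # smooth target to fit
coeff = fit(u, y)
pred = predict(u, coeff, u)
```

The triangular kernel here is just the count of shared active cells, which is what makes the CMAC weight-vector training and the kernel-form training agree in this simplified setting.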

Publisher:

Óbuda University

Year of Publication:

2006

Document Type:

article

Language:

English

Subjects:

kernel networks ; input-output system modelling ; neural networks ; CMAC ; generalization error ; LCC:Technology (General) ; LCC:T1-995 ; LCC:Technology ; LCC:T ; DOAJ:Technology (General) ; DOAJ:Technology and Engineering

DDC:

600 Technology *(computed)*

Relations:

http://uni-obuda.hu/journal/HorvathGabor_5.pdf

Title:

Lecture Notes for College Discrete Mathematics

Publisher:

Unpublished

Year of Publication:

2013

Document Type:

Text ; Book

Title:

A Canonical Representation of Order 3 Phase Type Distributions

Description:

The characterization and the canonical representation of order n phase type distributions (PH(n)) is an open research problem. This problem is solved for n = 2, since the equivalence of the acyclic and the general PH distributions was proven long ago. However, no canonical representation has been introduced for the general PH distribution class so far for n > 2. In this paper we summarize the related results for n = 3. Starting from these results we recommend a canonical representation of the PH(3) class and present a transformation procedure to obtain the canonical representation based on any (not only Markovian) vector-matrix representation of the distribution. Using this canonical transformation method we evaluate the moment bounds of the PH(3) distribution set and present the results of our numerical investigations.
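For readers unfamiliar with the objects above: a Markovian representation of a PH distribution is a pair (alpha, A) of an initial probability vector and a transient sub-generator, with CDF F(t) = 1 - alpha exp(At) 1 and raw moments mu_k = k! alpha (-A)^(-k) 1. The sketch below evaluates these for an illustrative PH(3) representation; the numeric values are made up, not taken from the paper.

```python
# Sketch: evaluating an order-3 phase-type (PH(3)) distribution from a
# Markovian (alpha, A) representation. Numbers are illustrative only.
import math
import numpy as np
from scipy.linalg import expm

alpha = np.array([0.5, 0.3, 0.2])   # initial probability vector
A = np.array([[-3.0, 1.0, 0.5],     # transient sub-generator (row sums <= 0)
              [0.0, -2.0, 1.0],
              [0.5, 0.0, -1.0]])
ones = np.ones(3)

def cdf(t):
    """F(t) = 1 - alpha exp(A t) 1, the PH cumulative distribution function."""
    return 1.0 - alpha @ expm(A * t) @ ones

def moment(k):
    """k-th raw moment: mu_k = k! alpha (-A)^(-k) 1."""
    return math.factorial(k) * alpha @ np.linalg.matrix_power(-A, -k) @ ones
```

A canonical representation is then a fixed structural form (e.g. a restricted pattern of zeros in A) such that every distribution in the class maps to exactly one (alpha, A) of that form.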

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-01-28

Source:

http://webspn.hit.bme.hu/~telek/cikkek/horv07b.pdf

Document Type:

text

Language:

en

Subjects:

Phase Type Distribution ; Canonical Form

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A Fast Matrix-Analytic Approximation for the Two Class GI/G/1 Non-Preemptive Priority Queue

Description:

In this paper we present the approximate waiting time analysis of two-class non-preemptive priority queues. The traffic of the queue is characterized by a "two parameter description", which means that the mean and the squared coefficient of variation of the inter-arrival times and of the service times are given. The solution is based on the separate analysis of the low and high priority queues. The resulting single-class queues have a homogeneous QBD (quasi birth death) structure, therefore their analysis is numerically efficient. We check the performance of the approximation extensively, and conclude that it gives good accuracy over a wide range of traffic parameters.
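The "two parameter description" named above reduces each traffic process to the mean and the squared coefficient of variation (SCV) of its inter-arrival (or service) times. A minimal sketch of extracting those two parameters from sample data (the exponential samples below are illustrative, not from the paper):

```python
# Sketch: the "two parameter description" of a traffic stream is the
# (mean, SCV) pair of its inter-arrival or service times.
import numpy as np

def two_parameter_description(samples):
    """Return (mean, SCV) where SCV = variance / mean^2."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    scv = samples.var() / mean ** 2
    return mean, scv

# Exponential inter-arrival times (a Poisson process) have SCV close to 1;
# SCV > 1 indicates burstier, SCV < 1 smoother arrivals.
rng = np.random.default_rng(0)
m, scv = two_parameter_description(rng.exponential(scale=2.0, size=100_000))
```

Matching only these two moments is what makes the approximation fast: any arrival process with the same (mean, SCV) is treated identically.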

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-19

Source:

http://www.comp.glam.ac.uk/ASMTA2005/Proc/asmta2005%20pdf/52-ASMTA05-29.pdf

Document Type:

text

Language:

en

Subjects:

queueing systems ; performance modeling ; priority queue

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Kernel CMAC: an Efficient Neural Network for Classification and Regression

Description:

Kernel methods in learning machines have been developed in the last decade as new techniques for solving classification and regression tasks. Kernel methods have many advantageous properties regarding their learning and generalization capabilities, but obtaining the solution usually requires computationally complex quadratic programming. To reduce computational complexity, many different versions have been developed. These versions apply different kernel functions, utilize the training data in different ways or apply different criterion functions. This paper deals with a special kernel network, which is based on the CMAC neural network. The Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Besides these attractive features, the modelling and generalization capabilities of a CMAC are rather limited. The paper shows that kernel CMAC, an extended version of the classical CMAC network implemented in a kernel form, combines the advantages of both approaches. Its modelling and generalization capabilities are improved while the limited computational complexity is maintained. The paper shows the architecture of this network and presents the relation between the classical CMAC and the kernel networks. The operation of the proposed architecture is illustrated using some common benchmark problems. Keywords: kernel networks, input-output system modelling, neural networks, CMAC, generalization error

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-02-02

Source:

http://www.bmf.hu/conferences/mtn2005/horvathgabor.pdf

Document Type:

text

Language:

en

Subjects:

Support Vector Machines (SVMs) ; Least Squares SVMs

DDC:

006 Special computer methods *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Approximate Waiting Time Analysis of Priority Queues

Description:

Non-preemptive priority queues appear in many computer and communication systems. Their precise mathematical analysis is a difficult and complex task, but sometimes it is sufficient to provide an approximation for performance measures.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-16

Source:

http://www.informatik.unibw-muenchen.de/PMCCS5/papers/horvath.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

The Refinement of Microcalcification Cluster Assessment By Joint Analysis of MLO and CC views

Description:

Most CAD systems for mammograms are composed of algorithms analysing the four X-ray images individually. It is a general experience that algorithms searching for microcalcification clusters can obtain high sensitivity only if specificity is low. To overcome this efficiency problem, this paper proposes a simple algorithm to combine the information of the two views (MLO/CC) of the breast. The procedure is based upon the experience of radiologists: masses and calcifications should emerge on both views, so if no match is found, the given object is a false positive hit. A positioning system is developed to find corresponding regions on the two images. Calcification clusters obtained in the individual images are matched in the "2.5-D" space provided by the positioning system. The credibility value of each hit is reassessed by the matching. The proposed approach can significantly reduce the number of false positive calcification hits.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://www.mit.bme.hu/~horvath/papers/IWDM_Joint.pdf

Document Type:

text

Language:

en

DDC:

006 Special computer methods *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Extended Least Squares LS–SVM

Description:

Among neural models, Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they eliminate certain crucial questions involved in neural network construction. The main drawback of the standard SVM is its high computational complexity, therefore a new technique, the Least Squares SVM (LS-SVM), has recently been introduced. In this paper we present an extended view of Least Squares Support Vector Regression (LS-SVR), which enables us to develop new formulations and algorithms for this regression technique. Based on manipulating the linear equation set, which embodies all information about the regression in the learning process, some new methods are introduced to simplify the formulations, speed up the calculations and/or provide better results. Keywords: Function estimation, Least-Squares Support Vector ...
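The linear equation set mentioned in the abstract can be made concrete: in standard LS-SVR with kernel matrix K and regularization parameter gamma, the bias b and dual coefficients alpha solve a single (n+1)-by-(n+1) linear system instead of a QP. A minimal sketch with an RBF kernel and illustrative data (this is the textbook formulation, not the paper's extended one):

```python
# Sketch of standard LS-SVR: training reduces to one linear solve.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    """f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Fit a noise-free sine as a smoke test of the formulation.
X = np.linspace(0.0, 2.0 * np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel()
b, alpha = lssvr_fit(X, y)
pred = lssvr_predict(X, b, alpha, X)
```

Note that every alpha_i is generally nonzero, which is the loss of sparseness that the robust/sparse extensions in the related records address.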

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2011-04-15

Source:

http://www.waset.org/journals/waset/v36/v36-60.pdf

Document Type:

text

Language:

en

Subjects:

Machines ; Regression ; System Modeling

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A Robust LS-SVM Regression

Description:

In comparison to the original SVM, which involves a quadratic programming task, LS-SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS-SVM is only optimal if the training samples are corrupted by Gaussian noise. In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector to a high dimensional kernel space in a nonlinear fashion, where the solution is calculated from a linear equation set. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation to achieve a sparse and robust estimate.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2011-07-19

Source:

http://www.waset.org/journals/waset/v7/v7-28.pdf

Document Type:

text

Language:

en

Subjects:

Least Squares Support Vector Machines ; Regression ; Sparse approximation

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A Sparse Robust Model for a Linz-Donawitz Steel Converter

Description:

Steelmaking with a Linz-Donawitz converter is a typical example of a complex industrial process where, due to the lack of exact mathematical (physical-chemical) models, the construction of a black-box behavioral model based on noisy and imprecise data is required. To construct a good model, a large number of such input-output samples should be used, which calls for a method that is sparse, in the sense that the resulting model complexity is independent of the sample number, and robust, to reduce the effects of noise. Lately kernel based methods, like SVMs, have been successfully applied to a number of such problems. The main problem with the traditional SVM is its high algorithmic complexity, which makes it infeasible for really large databases. LS-SVM solves this problem, but the resulting model is not sparse. Our solution uses a sparse and robust extension of LS-SVM, which leads to good results compared to other methods (such as MLPs) applied to the same problem.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-08-17

Source:

http://mycite.omikk.bme.hu/doc/15943.pdf

Document Type:

text

Language:

en

Subjects:

LS-SVM ; LS 2-SVM ; data preprocessing

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Currently in BASE: 68,072,316 Documents of 3,307 Content Sources

http://www.base-search.net