Hit List

Title:

Lecture Notes for College Discrete Mathematics

Publisher:

Unpublished

Year of Publication:

2013

Document Type:

Text ; Book

Title:

A Sparse Robust Model for a Linz-Donawitz Steel Converter

Description:

Abstract – Steelmaking with a Linz-Donawitz converter is a typical example of a complex industrial process where, due to the lack of exact mathematical (physical-chemical) models, the construction of a black-box behavioral model is required, based on noisy and imprecise data. To construct a good model, a large number of such input-output samples should be used, which calls for a method that is sparse, in the sense that the resulting model complexity is independent of the sample number, and robust, to reduce the effects of noise. Lately kernel-based methods, like SVMs, have been successfully applied to a number of such problems. The main problem with the traditional SVM is its high algorithmic complexity, which makes it infeasible for really large databases. LS-SVM solves this problem, but the resulting model is not sparse. Our solution uses a sparse and robust extension of LS-SVM, which leads to good results compared to other methods (such as MLPs) applied to the same problem.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-08-17

Source:

http://mycite.omikk.bme.hu/doc/15943.pdf

Document Type:

text

Language:

en

Subjects:

LS-SVM ; LS²-SVM ; data preprocessing

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

The Refinement of Microcalcification Cluster Assessment By Joint Analysis of MLO and CC views

Description:

Abstract. Most CAD systems for mammograms are composed of algorithms analysing the four X-ray images individually. It is a general experience that algorithms searching for microcalcification clusters can obtain high sensitivity only if specificity is low. To overcome this efficiency problem, this paper proposes a simple algorithm to combine information from the two views (MLO/CC) of the breast. The procedure is based upon the experience of radiologists: masses and calcifications should emerge on both views, so if no match is found, the given object is a false positive hit. A positioning system is developed to find corresponding regions on the two images. Calcification clusters obtained in the individual images are matched in the “2.5-D” space provided by the positioning system, and the credibility value of each hit is reassessed by the matching. The proposed approach can significantly reduce the number of false positive calcification hits.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://www.mit.bme.hu/~horvath/papers/IWDM_Joint.pdf

Document Type:

text

Language:

en

DDC:

006 Special computer methods (computed)

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Extended Least Squares LS–SVM

Description:

Abstract—Among neural models the Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they eliminate certain crucial questions involved in neural network construction. The main drawback of the standard SVM is its high computational complexity; therefore a new technique, the Least Squares SVM (LS–SVM), has recently been introduced. In this paper we present an extended view of Least Squares Support Vector Regression (LS–SVR), which enables us to develop new formulations and algorithms for this regression technique. Based on manipulating the linear equation set – which embodies all information about the regression in the learning process – some new methods are introduced to simplify the formulations, speed up the calculations and/or provide better results. Keywords—Function estimation, Least–Squares Support Vector

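The linear equation set this abstract refers to is the standard LS–SVM dual system. A minimal sketch of that solution (not the paper's own code; the RBF kernel choice and the `gamma`, `sigma` parameter names are assumptions):

```python
import numpy as np

def rbf(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix -- an assumed kernel choice
    sq = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM dual linear system:
    #   [ 0   1^T         ] [b    ]   [0]
    #   [ 1   K + I/gamma ] [alpha] = [y]
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    # f(x) = sum_i alpha_i * k(x, x_i) + b
    return rbf(X_test, X_train, sigma) @ alpha + b
```

The point of the formulation is visible in the code: one call to a linear solver replaces the quadratic programming step of standard SVM, but every training sample keeps a nonzero alpha, so sparseness is lost.
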
Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2011-04-15

Source:

http://www.waset.org/journals/waset/v36/v36-60.pdf

Document Type:

text

Language:

en

Subjects:

Machines ; Regression ; System Modeling T

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

CMAC Neural Network with Improved Generalization Capability for System Modelling

Description:

In system modelling, when there is not enough information to build physical models and the available knowledge is in the form of input–output data, a behavioural (input–output) or black-box modelling approach can be used. In black-box modelling neural networks play an important role. Their importance comes from their modelling capability:

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-17

Source:

http://www.mit.bme.hu/~horvath/papers/CMAC_Improved_gen.pdf

Document Type:

text

Language:

en

Subjects:

input-output system modelling ; neural networks ; CMAC ; generalization error

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A Robust LS-SVM Regression

Description:

Abstract—In comparison to the original SVM, which involves a quadratic programming task, LS–SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS–SVM is only optimal if the training samples are corrupted by Gaussian noise. In Least Squares SVM (LS–SVM) the nonlinear solution is obtained by first mapping the input vector to a high-dimensional kernel space in a nonlinear fashion, where the solution is calculated from a linear equation set. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation to achieve a sparse and robust estimate.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2011-07-19

Source:

http://www.waset.org/journals/waset/v7/v7-28.pdf

Document Type:

text

Language:

en

Subjects:

Least Squares Support Vector Machines ; Regression ; Sparse approximation

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Kernel CMAC: an Efficient Neural Network for Classification and Regression

Description:

Abstract: Kernel methods in learning machines have been developed in the last decade as new techniques for solving classification and regression problems. Kernel methods have many advantageous properties regarding their learning and generalization capabilities, but obtaining the solution usually requires computationally complex quadratic programming. To reduce computational complexity, many different versions have been developed; these versions apply different kernel functions, utilize the training data in different ways, or apply different criterion functions. This paper deals with a special kernel network, which is based on the CMAC neural network. The Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Besides these attractive features, the modelling and generalization capabilities of a CMAC may be rather limited. The paper shows that kernel CMAC – an extended version of the classical CMAC network implemented in a kernel form – improves the properties of the classical version significantly. Both the modelling and the generalization capabilities are improved while the limited computational complexity is maintained. The paper shows the architecture of this network and presents the relation between the classical CMAC and kernel networks. The operation of the proposed architecture is illustrated using some common benchmark problems. Keywords: kernel networks, input-output system modelling, neural networks, CMAC, generalization error

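The classical binary CMAC that the kernel version extends can be sketched in a few lines (a toy 1-D illustration, not the paper's kernel formulation; tile count, number of tilings `C`, and the LMS learning rate are assumed parameters):

```python
import numpy as np

class CMAC1D:
    # Albus-style binary CMAC on [0, 1): C overlapping tilings,
    # each input activates exactly one weight per tiling.
    def __init__(self, n_tiles=32, C=8, lr=0.1):
        self.n_tiles, self.C, self.lr = n_tiles, C, lr
        self.w = np.zeros((C, n_tiles + 1))   # one weight table per tiling

    def _active(self, x):
        # shift each tiling by a fraction of one tile width,
        # then quantize: this yields the C active weight indices
        return [(c, int((x + c / (self.C * self.n_tiles)) * self.n_tiles))
                for c in range(self.C)]

    def predict(self, x):
        # output is the sum of the C active weights
        return sum(self.w[c, i] for c, i in self._active(x))

    def train(self, x, y):
        # LMS (delta-rule) update spread over the active weights
        e = y - self.predict(x)
        for c, i in self._active(x):
            self.w[c, i] += self.lr * e / self.C
```

The sketch makes the trade-off in the abstract concrete: training touches only C table entries per sample (fast, hardware-friendly), but the binary, local basis functions limit modelling and generalization quality, which is what the kernel reformulation improves.
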
Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-08-14

Source:

http://www.bmf.hu/journal/horvathgabor_5.pdf

Document Type:

text

Language:

en

Subjects:

Kernel machines like Support Vector Machines (SVMs) [1 ; Least Squares SVMs

DDC:

006 Special computer methods (computed)

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Kernel CMAC: an Efficient Neural Network for Classification and Regression

Description:

Abstract: Kernel methods in learning machines have been developed in the last decade as new techniques for solving classification and regression tasks. Kernel methods have many advantageous properties regarding their learning and generalization capabilities, but obtaining the solution usually requires computationally complex quadratic programming. To reduce computational complexity, many different versions have been developed; these versions apply different kernel functions, utilize the training data in different ways, or apply different criterion functions. This paper deals with a special kernel network, which is based on the CMAC neural network. The Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Besides these attractive features, the modelling and generalization capabilities of a CMAC are rather limited. The paper shows that kernel CMAC – an extended version of the classical CMAC network implemented in a kernel form – combines the advantages of both approaches. Its modelling and generalization capabilities are improved while the limited computational complexity is maintained. The paper shows the architecture of this network and presents the relation between the classical CMAC and kernel networks. The operation of the proposed architecture is illustrated using some common benchmark problems. Keywords: kernel networks, input-output system modelling, neural networks, CMAC, generalization error

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-02-02

Source:

http://www.bmf.hu/conferences/mtn2005/horvathgabor.pdf

Document Type:

text

Language:

en

Subjects:

Kernel machines like Support Vector Machines (SVMs) [1 ; Least Squares SVMs

DDC:

006 Special computer methods (computed)

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Modeling

Description:

Sensors, transducers, signal processing

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2010-08-26

Source:

http://www.mit.bme.hu/%7Ehorvath/LTP/LTP_slides.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A WEIGHTED GENERALIZED LS–SVM

Description:

Neural networks play an important role in system modelling. This is especially true if model building is mainly based on observed data. Among neural models the Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they automatically answer certain crucial questions involved in neural network construction: they derive an ‘optimal’ network structure and answer the most important question related to the ‘quality’ of the resulting network. The main drawback of the standard SVM is its high computational complexity; therefore a new technique, the Least Squares SVM (LS–SVM), has recently been introduced. This is algorithmically more effective, because the solution can be obtained by solving a linear equation set instead of a computation-intensive quadratic programming problem. Although the gain in efficiency is rather significant, for really large problems the computational burden of LS–SVM is still too high. Moreover, an attractive feature of SVM, its sparseness, is lost. This paper proposes a special new generalized formulation and solution technique for the standard LS–SVM. By solving the modified LS–SVM equation set in the least squares (LS) sense (LS²–SVM), a pruned solution is achieved, while the computational burden is further reduced (Generalized LS–SVM). In this generalized LS–SVM framework a further modification, weighting, is also proposed, to reduce the sensitivity of the network construction to outliers while maintaining sparseness.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://www.pp.bme.hu/ee/2003_3/pdf/ee2003_3_05.pdf

Document Type:

text

Language:

en

Subjects:

function estimation ; least squares support vector machines ; regression ; support vector machines ; system

DDC:

006 Special computer methods (computed)

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Currently in BASE: 70,932,006 Documents of 3,416 Content Sources

http://www.base-search.net