Hit List

Title:

Model-Independent Mean Field Theory as a Local Method for Approximate Propagation of Information

Author:

Description:

We present a systematic approach to mean field theory (MFT) in a general probabilistic setting without assuming a particular model and avoiding physical notation. The mean field equations derived here may serve as a local and thus very simple method for approximate inference in graphical models. In general, there are multiple solutions to the mean field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean field solutions. We derive simple approximate expressions for the mixture weights, which can also be obtained by means of only local computations. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small "noisy-or network". 1 Introduction The benefits of using a probabilistic setting in many applied fields where uncertainty plays a prominent role -- such as image processing, neural networks and artificial intelligence -- have become increasingly apparent [1]. Unfortunately, p...
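The "noisy-or network" used in the experiments combines independent binary causes multiplicatively. As a rough illustration (the function name, parameter names, and numbers below are hypothetical, not taken from the paper), the noisy-or conditional probability can be sketched as:

```python
# Noisy-or: each active parent i independently fails to trigger the child
# with probability q_i, so P(child = 1 | parents) = 1 - leak * prod(q_i
# over active parents).
def noisy_or(parent_states, inhibition_probs, leak=1.0):
    """Probability that the child fires under the noisy-or model.

    parent_states    : list of 0/1 parent values
    inhibition_probs : q_i = P(parent i fails to activate the child)
    leak             : probability the child stays off when no cause is active
    """
    p_off = leak
    for state, q in zip(parent_states, inhibition_probs):
        if state:
            p_off *= q
    return 1.0 - p_off

p = noisy_or([1, 1, 0], [0.2, 0.5, 0.3])  # 1 - 0.2 * 0.5 = 0.9
```

Exact inference combines such conditional tables over the whole network; the MFT of the abstract approximates this with purely local computations.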

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-12

Source:

http://www7.informatik.tu-muenchen.de/~hofmannr/mf.ps.gz


Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.



Title:

Model-Independent Mean Field Theory as a Local Method for Approximate Propagation of Information

Author:

Description:

We present a systematic approach to mean field theory (MFT) in a general probabilistic setting without assuming a particular model. The mean field equations derived here may serve as a local and thus very simple method for approximate inference in probabilistic models such as Boltzmann machines or Bayesian networks. "Model-independent" means that we do not assume a particular type of dependencies; in a Bayesian network, for example, we allow arbitrary tables to specify conditional dependencies. In general, there are multiple solutions to the mean field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean field solutions. Simple approximate expressions for the mixture weights are given. The general formalism derived so far is evaluated for the special case of Bayesian networks. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small and in a very large Bayesian network. The results are compared to the exact results.
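For a Boltzmann machine, one of the model classes named above, the mean field equations take the familiar form m_i = tanh(b_i + Σ_j W_ij m_j). A minimal sketch of the local fixed-point iteration, and of the fact that different initializations can relax to different solutions (the two-unit network and its couplings are illustrative, not from the paper):

```python
import numpy as np

# Mean field fixed-point iteration for a Boltzmann machine with +/-1
# units: m_i <- tanh(b_i + sum_j W_ij * m_j).  Each update uses only the
# current means of the neighbours, i.e. the computation is purely local.
def mean_field(W, b, m0, iters=200):
    m = np.array(m0, dtype=float)
    for _ in range(iters):
        m = np.tanh(b + W @ m)
    return m

W = np.array([[0.0, 2.0],
              [2.0, 0.0]])               # strong positive coupling
b = np.zeros(2)
sol_up = mean_field(W, b, [0.9, 0.9])    # relaxes near (+m*, +m*)
sol_dn = mean_field(W, b, [-0.9, -0.9])  # relaxes near (-m*, -m*)
```

The paper's proposal is then to weight such distinct solutions and form a mixture estimate rather than keeping only one of them.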

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-16

Source:

http://tresp.org/./papers/generalMFT.ps.gz


Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.



Title:

Mean Field Inference in a General Probabilistic Setting

Author:

Description:

We present a systematic, model-independent formulation of mean field theory (MFT) as an inference method in probabilistic models. "Model-independent" means that we do not assume a particular type of dependency among the variables of a domain but instead work in a general probabilistic setting. In a Bayesian network, for example, you may use arbitrary tables to specify conditional dependencies and thus run MFT in any Bayesian network. Furthermore, the general mean field equations derived here shed light on the essence of MFT. MFT can be interpreted as a local iteration scheme which relaxes into a consistent state (a solution of the mean field equations). Iterating the mean field equations means propagating information through the network. In general, however, there are multiple solutions to the mean field equations. We show that improved approximations can be obtained by forming a weighted mixture of the multiple mean field solutions. Simple approximate expressions for the mixture weights ...

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-13

Source:

ftp://flop.informatik.tu-muenchen.de/pub/hofmannr/mf27.ps.gz


Document Type:

text

Language:

en

DDC:

531 Classical mechanics; solid mechanics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.



Title:

Model-Independent Mean Field Theory as a Local Method for Approximate Propagation of . . .

Author:

Description:

We present a systematic approach to mean field theory (MFT) in a general probabilistic setting without assuming a particular model. The mean field equations derived here may serve as a local and thus very simple method for approximate inference in probabilistic models such as Boltzmann machines or Bayesian networks. “Model-independent” means that we do not assume a particular type of dependencies; in a Bayesian network, for example, we allow arbitrary tables to specify conditional dependencies. In general, there are multiple solutions to the mean field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean field solutions. Simple approximate expressions for the mixture weights are given. The general formalism derived so far is evaluated for the special case of Bayesian networks. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small and in a very large Bayesian network. The results are compared to the exact results.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2010-12-06

Source:

http://wwwbrauer.informatik.tu-muenchen.de/~trespvol/papers/generalMFT.pdf


Document Type:

text

Language:

en

Subjects:

1


Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Natural Sound Statistics and Divisive Normalization in the Auditory System

Author:

Description:

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://www.cns.nyu.edu/pub/lcv/schwartz00a.pdf


Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Natural Sound Statistics and Divisive Normalization in the Auditory System

Author:

Description:

We explore the statistical properties of natural sound stimuli preprocessed with a bank of linear filters. The responses of such filters exhibit a striking form of statistical dependency, in which the response variance of each filter grows with the response amplitude of filters tuned for nearby frequencies. These dependencies may be substantially reduced using an operation known as divisive normalization, in which the response of each filter is divided by a weighted sum of the rectified responses of other filters. The weights may be chosen to maximize the independence of the normalized responses for an ensemble of natural sounds. We demonstrate that the resulting model accounts for non-linearities in the response characteristics of the auditory nerve, by comparing model simulations to electrophysiological recordings. In previous work (NIPS, 1998) we demonstrated that an analogous model derived from the statistics of natural images accounts for non-linear properties of neur...
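A minimal sketch of the divisive normalization operation described above, using squared responses as the rectification (the function name, uniform pooling weights, and the constant sigma are hypothetical illustrations, not weights fitted to natural sounds as in the paper):

```python
import numpy as np

# Divisive normalization: each filter response is divided by a weighted
# sum of the rectified (here: squared) responses of the other filters,
# plus a constant that keeps the denominator away from zero.
def divisive_normalize(responses, weights, sigma=1.0):
    responses = np.asarray(responses, dtype=float)
    pool = weights @ responses**2      # weighted sum of rectified responses
    return responses / np.sqrt(sigma**2 + pool)

r = np.array([1.0, 2.0, 0.5])          # linear filter-bank responses
w = np.full((3, 3), 0.1)               # uniform pooling weights (made up)
out = divisive_normalize(r, w)
```

In the paper the pooling weights are instead chosen to maximize the statistical independence of the normalized responses over an ensemble of natural sounds.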

Publisher:

MIT Press

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-15

Source:

http://www.cns.nyu.edu/pub/eero/schwartz00a.ps.gz


Document Type:

text

Language:

en

DDC:

612 Human physiology *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Statistical relational learning of trust (DOI 10.1007/s10994-010-5211-x)

Author:

Description:

The learning of trust and distrust is a crucial aspect of social interaction among autonomous, mentally-opaque agents. In this work, we address the learning of trust based on past observations and context information. We argue that from the truster’s point of view trust is best expressed as one of several relations that exist between the agent to be trusted (trustee) and the state of the environment. Besides attributes expressing trustworthiness, additional relations might describe commitments made by the trustee with regard to the current situation, for example: a seller offers a certain price for a specific product. We show how to implement and learn context-sensitive trust using statistical relational learning in the form of a Dirichlet process mixture model called the Infinite Hidden Relational Trust Model (IHRTM). The practicability and effectiveness of our approach are evaluated empirically on user ratings gathered from eBay. Our results suggest that (i) the inherent clustering achieved in the algorithm allows the truster to characterize the structure of a trust situation and provides meaningful trust assessments; (ii) utilizing the collaborative filtering effect associated with relational data does improve trust assessment performance; (iii) by learning faster and transferring knowledge more effectively we improve cold-start performance and can cope better with dynamic behavior in open multiagent systems. The latter is demonstrated with interactions recorded from a strategic two-player negotiation scenario.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-07-17

Source:

http://wwwbrauer.in.tum.de/~trespvol/papers/10.1007_s10994-010-5211-x.pdf


Document Type:

text

Language:

en

DDC:

006 Special computer methods *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Statistical relational learning of trust (DOI 10.1007/s10994-010-5211-x)

Author:

Description:

The learning of trust and distrust is a crucial aspect of social interaction among autonomous, mentally-opaque agents. In this work, we address the learning of trust based on past observations and context information. We argue that from the truster’s point of view trust is best expressed as one of several relations that exist between the agent to be trusted (trustee) and the state of the environment. Besides attributes expressing trustworthiness, additional relations might describe commitments made by the trustee with regard to the current situation, for example: a seller offers a certain price for a specific product. We show how to implement and learn context-sensitive trust using statistical relational learning in the form of a Dirichlet process mixture model called the Infinite Hidden Relational Trust Model (IHRTM). The practicability and effectiveness of our approach are evaluated empirically on user ratings gathered from eBay. Our results suggest that (i) the inherent clustering achieved in the algorithm allows the truster to characterize the structure of a trust situation and provides meaningful trust assessments; (ii) utilizing the collaborative filtering effect associated with relational data does improve trust assessment performance; (iii) by learning faster and transferring knowledge more effectively we improve cold-start performance and can cope better with dynamic behavior in open multiagent systems. The latter is demonstrated with interactions recorded from a strategic two-player negotiation scenario.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2011-04-04

Source:

http://wwwbrauer.informatik.tu-muenchen.de/%7Etrespvol/papers/fulltext.pdf


Document Type:

text

Language:

en

DDC:

006 Special computer methods *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Tree-Based Modeling and Estimation of Gaussian Processes on Graphs with Cycles

Author:

Description:

We present the embedded trees algorithm, an iterative technique for estimation of Gaussian processes defined on arbitrary graphs. By exactly solving a series of modified problems on embedded spanning trees, it computes the conditional means with an efficiency comparable to or better than other techniques. Unlike other methods, the embedded trees algorithm also computes exact error covariances. The error covariance computation is most efficient for graphs in which removing a small number of edges reveals an embedded tree. In this context, we demonstrate that sparse loopy graphs can provide a significant increase in modeling power relative to trees, with only a minor increase in estimation complexity.
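One common way to realize the idea described above is as a matrix splitting: write the precision matrix J of the Gaussian model as J = J_tree - K, where J_tree keeps only the edges of an embedded spanning tree, and repeatedly solve exactly against J_tree. A rough sketch (the 3-node cycle, its couplings, and the dense solve standing in for a fast tree solver are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Embedded trees iteration for the Gaussian estimation problem J x = h
# on a loopy graph: split J = J_tree - K (K holds the cut, off-tree
# edges) and iterate x <- J_tree^{-1} (h + K x).  np.linalg.solve stands
# in for an efficient tree solver.
def embedded_trees(J, J_tree, h, iters=100):
    K = J_tree - J                  # contribution of the cut edges
    x = np.zeros_like(h)
    for _ in range(iters):
        x = np.linalg.solve(J_tree, h + K @ x)
    return x

# 3-node cycle; cutting the (0, 2) edge leaves the chain 0-1-2 as a tree.
J = np.array([[ 2.0, -0.5, -0.5],
              [-0.5,  2.0, -0.5],
              [-0.5, -0.5,  2.0]])
J_tree = J.copy()
J_tree[0, 2] = J_tree[2, 0] = 0.0
h = np.array([1.0, 0.0, 0.0])
x = embedded_trees(J, J_tree, h)    # approaches np.linalg.solve(J, h)
```

Because only one edge was cut here, each sweep is a single tree solve, matching the observation above that the method is most efficient when removing a few edges reveals an embedded tree.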

Publisher:

MIT Press

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-15

Source:

http://ssg.mit.edu/~mjwain/ETalg_nips00_withheader.ps.gz


Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Large-margin thresholded ensembles for ordinal regression: Theory and practice (Proc. 17th Int. Conf. Algorithmic Learn. Theory, 2006, pp. 319–333)

Author:

S. Kramer ; G. Widmer ; B. Pfahringer ; M. Degroeve ; R. Herbrich ; T. Graepel ; K. Obermayer ; Large Margin Rank ; V. E. Johnson ; J. H. Albert ; Ordinal Data Modeling (statistics For ; A. Shashua ; A. Levin ; M. Almeida ; A. Braga ; J. Braga ; Svm-km Speeding Svms ; Z. Xu ; K. Yu ; V. Tresp ; X. Xu ; J. Wang ; Representative Sampling

Description:

[9] H. Lin and L. Li, “Large-margin thresholded ensembles for ordinal regression: Theory and practice,” in Proc. 17th Int. Conf. Algorithmic Learn. Theory, 2006, pp. 319–333.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-07-25

Source:

http://dollar.biz.uiowa.edu/%7Estreet/fayed09.pdf


Document Type:

text

Language:

en

Subjects:

data sets using support cluster machines ; ” in Proc. 14th Annu


Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Currently in BASE: 69,696,404 Documents of 3,363 Content Sources

http://www.base-search.net