
Title:

Robust Multi-Task Learning with t-Processes

Author:

Description:

Most current multi-task learning frameworks ignore the robustness issue, which means that the presence of "outlier" tasks may greatly reduce overall system performance. We introduce a robust framework for Bayesian multi-task learning, t-processes (TP), which are a generalization of Gaussian processes (GP) for multi-task learning. TP allows the system to effectively distinguish good tasks from noisy or outlier tasks. Experiments show that TP not only improves overall system performance, but can also serve as an indicator of the "informativeness" of different tasks.
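
The robustness claimed in this abstract comes from the heavier tails of the t-process. As a hedged sketch (not the paper's model), the finite-dimensional marginals of a t-process are multivariate Student-t, which can be drawn as a Gaussian whose covariance is inflated by a random gamma scale; all parameters below are arbitrary illustration values:

```python
import numpy as np

# Scale-mixture construction behind a Student-t / t-process marginal:
# draw a Gaussian, then divide by the square root of a
# Gamma(nu/2, rate=nu/2) scale. The heavier tails are what let the
# model down-weight "outlier" tasks. nu, d, n are arbitrary.
rng = np.random.default_rng(1)
nu, d, n = 3.0, 4, 100_000
cov = np.eye(d)

g = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # chi^2_nu / nu
z = rng.multivariate_normal(np.zeros(d), cov, size=n)   # Gaussian draws
t = z / np.sqrt(g)[:, None]                             # multivariate t, nu dof

# Tail comparison: the t draws exceed |5| far more often than the
# Gaussian draws do.
t_tail = (np.abs(t) > 5).mean()
z_tail = (np.abs(z) > 5).mean()
print(t_tail > z_tail)
```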

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://wwwbrauer.informatik.tu-muenchen.de/~trespvol/papers/icml2007_tp.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Blockwise supervised inference on large graphs

Author:

Description:

In this paper we consider supervised learning on large-scale graphs, which is highly demanding in terms of time and memory costs. We demonstrate that, if a graph has a bipartite structure containing a small set of nodes that separate the remaining nodes from each other, the inference can be done equivalently over an induced graph connecting only the separators. Since each separator influences a certain neighborhood, the method essentially exploits the block structure of graphs to improve scalability. In the next step, instead of identifying the bipartite structure in a given graph, which is often difficult, we propose to construct a set of separators via two methods, adjacency matrix factorization and mixture models, both of which naturally yield a bipartite graph while preserving the original data structure. Finally, we report results of experiments on a toy problem and an intrusion detection problem.
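
The separator idea in this abstract has a concrete linear-algebra reading for Gaussian/harmonic inference on graphs: eliminating the non-separator block of the graph Laplacian via a Schur complement (Kron reduction) leaves an exactly equivalent quadratic form over the separators alone. The toy graph and node split below are made up for illustration and are not the paper's construction:

```python
import numpy as np

# Path graph 0-1-2-3-4; eliminate interior nodes R, keep "separators" S.
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
S, R = [0, 2, 4], [1, 3]                # arbitrary separator / interior split

L_SS = L[np.ix_(S, S)]
L_SR = L[np.ix_(S, R)]
L_RR = L[np.ix_(R, R)]

# Schur complement: Laplacian of the induced graph on the separators.
L_sep = L_SS - L_SR @ np.linalg.inv(L_RR) @ L_SR.T

# Minimizing the full quadratic form over the eliminated block gives
# exactly the reduced form: min over f_R of f^T L f = f_S^T L_sep f_S.
f_S = np.array([1.0, 0.0, -1.0])
f_R = -np.linalg.inv(L_RR) @ L_SR.T @ f_S   # harmonic fill-in on R
f = np.zeros(5)
f[S], f[R] = f_S, f_R

print(np.isclose(f @ L @ f, f_S @ L_sep @ f_S))  # True
```

The reduced matrix `L_sep` is itself a valid Laplacian (rows sum to zero), so inference machinery for the full graph applies unchanged to the smaller separator graph.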

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-17

Source:

http://wwwbrauer.informatik.tu-muenchen.de/~trespvol/papers/block_graph_inference.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Blockwise supervised inference on large graphs

Author:

Description:

In this paper we consider supervised learning on large-scale graphs, which is highly demanding in terms of time and memory costs. We demonstrate that, if a graph has a bipartite structure containing a small set of nodes that separate the remaining nodes from each other, the inference can be done equivalently over an induced graph connecting only the separators. Since each separator influences a certain neighborhood, the method essentially exploits the block structure of graphs to improve scalability. In the next step, instead of identifying the bipartite structure in a given graph, which is often difficult, we propose to construct a set of separators via two methods, adjacency matrix factorization and mixture models, both of which naturally yield a bipartite graph while preserving the original data structure. Finally, we report results of experiments on a toy problem and an intrusion detection problem.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-17

Source:

http://www.dbs.informatik.uni-muenchen.de/~yu_k/block_graph_inference.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Knowledge

Author:

Description:

Social networks usually involve rich collections of objects, which are jointly linked into complex relational networks. Social network analysis has gained in importance due to the growing availability of data on novel social networks, e.g. citation networks, Web 2.0 social networks like Facebook, and the hyperlinked internet. Recently, the infinite hidden relational model (IHRM) has been developed for the analysis of complex relational domains. The IHRM extends the expressiveness of a relational model by introducing for each object an infinite-dimensional hidden variable as part of a Dirichlet process mixture model. In this paper we discuss how the IHRM can be used to model and analyze social networks. In such an IHRM-based social network model, each edge is associated with a random variable (RV) and the probabilistic dependencies between these RVs are specified by the model.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-07

Source:

http://wwwbrauer.informatik.tu-muenchen.de/~trespvol/papers/snakdd08srl.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

VIPS: a Vision-based Page Segmentation Algorithm

Author:

Deng Cai ; Shipeng Yu ; Ji-rong Wen ; Wei-ying Ma

Description:

A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic, top-down, tag-tree-independent approach to detect web content structure. It simulates how a user understands web layout structure based on visual perception. Compared with other existing techniques, our approach is independent of the underlying document representation such as HTML and works well even when the HTML structure differs greatly from the layout structure. Experiments show satisfactory results.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-08-15

Source:

http://research.microsoft.com/~jrwen/jrwen_files/publications/vips_technical report.pdf

Document Type:

text

Language:

en

Subjects:

VIPS ; a Vision-based Page Segmentation Algorithm

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Knowledge

Author:

Description:

Social networks usually involve rich collections of objects, which are jointly linked into complex relational networks. Social network analysis has gained in importance due to the growing availability of data on novel social networks, e.g. citation networks, Web 2.0 social networks like Facebook, and the hyperlinked internet. Recently, the infinite hidden relational model (IHRM) has been developed for the analysis of complex relational domains. The IHRM extends the expressiveness of a relational model by introducing for each object an infinite-dimensional hidden variable as part of a Dirichlet process mixture model. In this paper we discuss how the IHRM can be used to model and analyze social networks. In such an IHRM-based social network model, each edge is associated with a random variable (RV) and the probabilistic dependencies between these RVs are specified by the model.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-08-31

Source:

http://www.dbs.informatik.uni-muenchen.de/~spyu/paper/kdd2008_snakddWS.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A Nonparametric Hierarchical Bayesian Framework For Information Filtering

Author:

Description:

Information filtering has made considerable progress in recent years. The predominant approaches are content-based methods and collaborative methods. Researchers have largely concentrated on one of the two approaches, since a principled unifying framework is still lacking. This paper suggests that both approaches can be combined under a hierarchical Bayesian framework. Individual content-based user profiles are generated, and collaboration between various user models is achieved via a common learned prior distribution. However, it turns out that a parametric distribution (e.g. Gaussian) is too restrictive to describe such a common learned prior distribution. We thus introduce a nonparametric common prior, which is a sample generated from a Dirichlet process that assumes the role of a hyperprior. We describe effective means to learn this nonparametric distribution, and apply it to learn users' information needs. The resultant algorithm is simple and understandable, and offers a principled solution for combining content-based filtering and collaborative filtering. Within our framework, we are now able to interpret various existing techniques from a unifying point of view. Finally, we demonstrate the empirical success of the proposed information filtering methods.
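
The nonparametric prior the abstract introduces can be illustrated with the Chinese restaurant process, one standard way to sample the clustering a Dirichlet process induces. This is a generic sketch of that kind of prior, not the paper's learning algorithm; `n` and `alpha` are arbitrary:

```python
import random

def crp(n, alpha, rng):
    """Chinese restaurant process: cluster n users under a DP(alpha) prior."""
    counts, assignments = [], []
    for i in range(n):
        r = rng.random() * (i + alpha)
        if r < alpha:                  # open a new cluster, prob alpha/(i+alpha)
            assignments.append(len(counts))
            counts.append(1)
        else:                          # join cluster t, prob counts[t]/(i+alpha)
            r -= alpha
            t = 0
            while r >= counts[t]:
                r -= counts[t]
                t += 1
            assignments.append(t)
            counts[t] += 1
    return assignments, counts

assignments, counts = crp(100, alpha=2.0, rng=random.Random(0))
print(sum(counts), len(counts))  # all 100 users placed; few shared clusters
```

The "rich get richer" rule (larger clusters attract more members) is what lets users with similar information needs end up sharing a profile, without fixing the number of clusters in advance.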

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2009-04-18

Source:

http://tresp.org/./papers/p276-yu.pdf

Document Type:

text

Language:

en

Subjects:

Algorithms ; Theory ; Human Factors. Keywords: Collaborative Filtering ; Content-Based Filtering ; Dirichlet Process ; Nonparametric Bayesian Modelling

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Local Learning Projections

Author:

Description:

This paper presents a Local Learning Projection (LLP) approach for linear dimensionality reduction. We first point out that the well-known Principal Component Analysis (PCA) essentially seeks the projection that has the minimal global estimation error. We then propose a dimensionality reduction algorithm that leads to the projection with the minimal local estimation error, and elucidate its advantages for classification tasks. We also show that LLP preserves local information in the sense that the projection value of each point can be well estimated from its neighbors and their projection values. Experimental results are provided to validate the effectiveness of the proposed algorithm.
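
The PCA fact the abstract starts from (that PCA minimizes the global squared reconstruction error among orthogonal linear projections) can be checked numerically. This hedged sketch covers only that baseline view, not LLP itself; the data and dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic centered data with decreasing variance per coordinate.
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
X -= X.mean(axis=0)

# Principal directions = top eigenvectors of the sample covariance.
cov = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues ascending
W = eigvecs[:, ::-1][:, :2]                 # top-2 directions, shape (5, 2)

pca_err = np.sum((X - X @ W @ W.T) ** 2)    # global reconstruction error

# Any other orthonormal 2-d projection reconstructs no better than PCA.
Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))
rand_err = np.sum((X - X @ Q @ Q.T) ** 2)

print(pca_err <= rand_err + 1e-9)  # True
```

The PCA error also equals `n` times the sum of the discarded eigenvalues, which is exactly the "minimal global estimation error" characterization; LLP replaces this global criterion with a local one computed per neighborhood.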

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-12-30

Source:

http://www.dbs.informatik.uni-muenchen.de/~spyu/paper/icml2007_llp.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Siemens Medical Solutions, USA

Author:

Description:

of growing interest in machine learning. Xu et al. (2006) introduced the infinite hidden relational model

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://tresp.org/papers/ihrm_mlg07.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A Probabilistic Clustering-Projection Model for Discrete Data

Author:

Description:

For discrete co-occurrence data like documents and words, calculating optimal projections and clustering are two different but related tasks. The goal of projection is to find a low-dimensional latent space for words, and clustering aims at grouping documents based on their feature representations. In general, projection and clustering are studied independently, but they both represent the intrinsic structure of the data and should reinforce each other. In this paper we introduce a probabilistic clustering-projection (PCP) model for discrete data, in which both are represented in a unified framework. Clustering is performed in the projected space, and projection explicitly takes the clustering structure into account. Iterating the two operations turns out to be exactly the variational EM algorithm under Bayesian model inference, and is thus guaranteed to improve the data likelihood. The model is evaluated on two text data sets, both showing very encouraging results.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2008-07-01

Source:

http://www.dbs.informatik.uni-muenchen.de/Publikationen/Papers/pkdd05.pdf

Document Type:

text

Language:

en

DDC:

006 Special computer methods *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Currently in BASE: 68,072,316 Documents of 3,307 Content Sources

http://www.base-search.net