PUBLIC MARKS with tag "machine learning"

2019

NSFW JS

by srcmax
Client-side indecent content checking

2008

Conditional Random Fields

by ogrisel (via)
Conditional random fields (CRFs) are a probabilistic framework for labeling and segmenting structured data, such as sequences, trees and lattices. The underlying idea is that of defining a conditional probability distribution over label sequences given a particular observation sequence, rather than a joint distribution over both label and observation sequences. The primary advantage of CRFs over hidden Markov models is their conditional nature, resulting in the relaxation of the independence assumptions required by HMMs in order to ensure tractable inference. Additionally, CRFs avoid the label bias problem, a weakness exhibited by maximum entropy Markov models (MEMMs) and other conditional Markov models based on directed graphical models. CRFs outperform both MEMMs and HMMs on a number of real-world tasks in many fields, including bioinformatics, computational linguistics and speech recognition.
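To make the conditional definition concrete, here is a minimal numpy sketch of a linear-chain CRF: the score of a label sequence is normalized by a partition function computed with the forward algorithm. The emission and transition scores below are made-up illustrative values, not taken from the linked overview.

```python
import numpy as np

def crf_log_prob(emissions, transitions, labels):
    """Log p(labels | x) for a linear-chain CRF.

    emissions:   (T, K) array, emissions[t, k] = score of label k at step t
    transitions: (K, K) array, transitions[i, j] = score of moving i -> j
    labels:      length-T sequence of label indices
    """
    T, K = emissions.shape
    # Unnormalized score of the given label sequence.
    score = emissions[0, labels[0]]
    for t in range(1, T):
        score += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    # Partition function via the forward algorithm, in log space.
    alpha = emissions[0]
    for t in range(1, T):
        m = alpha[:, None] + transitions            # (K, K): prev label x cur label
        alpha = emissions[t] + np.logaddexp.reduce(m, axis=0)
    log_Z = np.logaddexp.reduce(alpha)
    return score - log_Z

emissions = np.log(np.array([[0.7, 0.3], [0.4, 0.6], [0.5, 0.5]]))
transitions = np.log(np.array([[0.8, 0.2], [0.3, 0.7]]))
print(np.exp(crf_log_prob(emissions, transitions, [0, 1, 1])))
```

Because the normalization runs over whole label sequences given the observation, no per-step independence assumptions on the observations are needed, which is the advantage over HMMs the description points out.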

Dbn_Tutorial

by ogrisel (via)
Topics: Energy models, causal generative models vs. energy models in overcomplete ICA, contrastive divergence learning, score matching, restricted Boltzmann machines, deep belief networks
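As a pointer to what contrastive divergence learning looks like in practice, here is a minimal sketch of one CD-1 update for a binary restricted Boltzmann machine. Plain numpy; the variable names and the single Gibbs step are illustrative choices, not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM (updates W, b, c in place).

    v0: (n_vis,) binary visible vector
    W:  (n_vis, n_hid) weights; b: visible bias; c: hidden bias
    """
    # Up: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down: reconstruct visibles, then recompute hidden probabilities.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Contrastive divergence: positive phase minus negative phase.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)

# Example: 6 visible units, 3 hidden units, one training vector.
W = 0.01 * rng.standard_normal((6, 3))
b, c = np.zeros(6), np.zeros(3)
cd1_step(np.array([1., 0., 1., 1., 0., 0.]), W, b, c)
```

Layers trained this way can be stacked greedily, each layer's hidden activations serving as data for the next, which is how the deep belief networks in the topic list are built.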

Modular toolkit for Data Processing (MDP)

by ogrisel
Modular toolkit for Data Processing (MDP) is a Python data processing framework. Implemented algorithms include: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Slow Feature Analysis (SFA), Independent Slow Feature Analysis (ISFA), Growing Neural Gas (GNG), Factor Analysis, Fisher Discriminant Analysis (FDA), Gaussian Classifiers, and Restricted Boltzmann Machines.
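A minimal sketch of MDP's node-based train/execute pattern, here with PCANode. This assumes an MDP 2.x-style interface, so check the project documentation for the exact API.

```python
import numpy as np
import mdp  # Modular toolkit for Data Processing

x = np.random.RandomState(0).randn(200, 5)

# MDP nodes are trained incrementally, then applied with execute().
pca = mdp.nodes.PCANode(output_dim=2)
pca.train(x)
y = pca.execute(x)   # project onto the two leading principal components
print(y.shape)       # (200, 2)
```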

lasvm [Léon Bottou]

by ogrisel (via)
LASVM is an approximate SVM solver that uses online approximation. It reaches accuracies similar to those of a real SVM after performing a single sequential pass through the training examples. Further benefits can be achieved using selective sampling techniques to choose which example should be considered next. As shown in the graph, LASVM requires considerably less memory than a regular SVM solver. This becomes a considerable speed advantage for large training sets. In fact, LASVM has been used to train a 10-class SVM classifier with 8 million examples on a single processor.
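LASVM itself maintains a support-vector set with online PROCESS/REPROCESS steps; the sketch below is not LASVM but a much simpler single-pass online linear SVM (a Pegasos-style stochastic subgradient update on the hinge loss) that illustrates the same idea of approximating an SVM in one sequential pass with constant memory.

```python
import numpy as np

def online_hinge_svm(X, y, lam=0.01):
    """Single sequential pass over (X, y) with a Pegasos-style update.

    Not the LASVM algorithm; only an illustration of one-pass online
    SVM training. Labels must be +1 / -1.
    """
    n, d = X.shape
    w = np.zeros(d)
    for t, (x, label) in enumerate(zip(X, y), start=1):
        eta = 1.0 / (lam * t)                  # decaying step size
        w *= (1 - eta * lam)                   # shrink (regularization)
        if label * (w @ x) < 1.0:              # margin violation
            w += eta * label * x
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(1000))
w = online_hinge_svm(X, y)
```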

An Empirical Evaluation of Deep Architectures on Problems with Many Factors of Variation [PDF]

by ogrisel (via)
Recently, several learning algorithms relying on models with deep architectures have been proposed. Though they have demonstrated impressive performance, to date, they have only been evaluated on relatively simple problems such as digit recognition in a controlled environment, for which many machine learning algorithms already report reasonable results. Here, we present a series of experiments which indicate that these models show promise in solving harder learning problems that exhibit many factors of variation. These models are compared with well-established algorithms such as Support Vector Machines and single hidden-layer feed-forward neural networks.

YouTube - Visual Perception with Deep Learning

by ogrisel (via)
A long-term goal of Machine Learning research is to solve highly complex "intelligent" tasks, such as visual perception, auditory perception, and language understanding. To reach that goal, the ML community must solve two problems: the Deep Learning Problem and the Partition Function Problem.

There is considerable theoretical and empirical evidence that complex tasks, such as invariant object recognition in vision, require "deep" architectures, composed of multiple layers of trainable non-linear modules. The Deep Learning Problem is related to the difficulty of training such deep architectures. Several methods have recently been proposed to train (or pre-train) deep architectures in an unsupervised fashion. Each layer of the deep architecture is composed of an encoder, which computes a feature vector from the input, and a decoder, which reconstructs the input from the features. A large number of such layers can be stacked and trained sequentially, thereby learning a deep hierarchy of features with increasing levels of abstraction.

The training of each layer can be seen as shaping an energy landscape with low valleys around the training samples and high plateaus everywhere else. Forming these high plateaus constitutes the so-called Partition Function Problem. A particular class of methods for deep energy-based unsupervised learning will be described that solves the Partition Function Problem by imposing sparsity constraints on the features. The method can learn multiple levels of sparse and overcomplete representations of data. When applied to natural image patches, the method produces hierarchies of filters similar to those found in the mammalian visual cortex.

An application to category-level object recognition with invariance to pose and illumination will be described (with a live demo). Another application to vision-based navigation for off-road mobile robots will be described (with videos). The system autonomously learns to discriminate obstacles from traversable areas at long range.
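To illustrate the encoder/decoder layers described above, here is a minimal numpy sketch of one such layer: a linear encoder and decoder trained to reconstruct the input, with an L1 penalty standing in for the sparsity constraint on the features. The talk's actual models use non-linear modules and a different sparsification scheme; this is only the general shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sparse_autoencoder(X, n_hidden, lam=0.1, lr=0.01, epochs=200):
    """One encoder/decoder layer with an L1 sparsity penalty on the code."""
    n, d = X.shape
    We = 0.1 * rng.standard_normal((d, n_hidden))   # encoder
    Wd = 0.1 * rng.standard_normal((n_hidden, d))   # decoder
    for _ in range(epochs):
        Z = X @ We                         # codes (features)
        Xhat = Z @ Wd                      # reconstruction
        R = Xhat - X                       # reconstruction error
        dZ = R @ Wd.T + lam * np.sign(Z)   # gradient through decoder + sparsity
        Wd -= lr * (Z.T @ R) / n
        We -= lr * (X.T @ dZ) / n
    return We, Wd

X = rng.standard_normal((500, 20))
# Overcomplete code (30 hidden > 20 inputs), kept usable by the sparsity term.
We, Wd = train_sparse_autoencoder(X, n_hidden=30)
```

The codes `X @ We` from a trained layer would then serve as input for the next layer in the stack.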

DeepLearningWorkshopNIPS2007 < Public < TWiki

by ogrisel (via)
Theoretical results strongly suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need "deep architectures", which are composed of multiple levels of non-linear operations (such as in neural nets with many hidden layers). Searching the parameter space of deep architectures is a difficult optimization task, but learning algorithms (e.g. Deep Belief Networks) have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This workshop is intended to bring together researchers interested in the question of deep learning in order to review the current algorithms' principles and successes, but also to identify the challenges, and to formulate promising directions of investigation. Besides the algorithms themselves, there are many fundamental questions that need to be addressed: What would be a good formalization of deep learning? What new ideas could be exploited to make further inroads to that difficult optimization problem? What makes a good high-level representation or abstraction? What type of problem is deep learning appropriate for? The workshop presentation page shows selected links to relevant papers (PDF) on the topic.

YouTube - The Next Generation of Neural Networks

by ogrisel (via)
In the 1980s, new learning algorithms for neural networks promised to solve difficult classification tasks, like speech or object recognition, by learning many layers of non-linear features. The results were disappointing for two reasons: there was never enough labeled data to learn millions of complicated features, and the learning was much too slow in deep neural networks with many layers of features. These problems can now be overcome by learning one layer of features at a time and by changing the goal of learning. Instead of trying to predict the labels, the learning algorithm tries to create a generative model that produces data which looks just like the unlabeled training data. These new neural networks outperform other machine learning methods when labeled data is scarce but unlabeled data is plentiful. An application to very fast document retrieval will be described.

Vincent Zoonekynd's Blog

by ogrisel
Blog on programming, machine learning and financial analysis

2007

ICML 2007 - PRELIMINARY VIDEOS FROM THE SPOT

by ogrisel (via)
The 24th Annual International Conference on Machine Learning is being held in conjunction with the 2007 International Conference on Inductive Logic Programming at Oregon State University in Corvallis, Oregon. As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive.

Elefant - What is Elefant

by ogrisel
Elefant (Efficient Learning, Large-scale Inference, and Optimisation Toolkit) is an open source library for machine learning. Elefant includes modules for many common optimisation problems arising in machine learning and inference. It is designed to be modular and easy to use. The framework provides an easy-to-use Python interface, which can be used for quick prototyping and for testing inference algorithms.

Artificial Intelligence: A Modern Approach

by ogrisel
The leading textbook in Artificial Intelligence. Used in over 1000 universities in 91 countries (over 90% market share). The 85th most cited publication on Citeseer.

Temporal difference learning - Wikipedia, the free encyclopedia

by ogrisel (via)
Temporal difference learning is a prediction method. It has mostly been used for solving the reinforcement learning problem. "TD learning is a combination of Monte Carlo ideas and dynamic programming (DP) ideas." [2] TD resembles a Monte Carlo method because it learns by sampling the environment according to some policy. TD is related to dynamic programming techniques because it approximates its current estimate based on previously learned estimates (a process known as bootstrapping). The TD learning algorithm is related to the temporal difference model of animal learning.
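The bootstrapping idea is easy to show in code: tabular TD(0) nudges the value of a state toward a target built from the current estimate of the next state's value. A minimal sketch with hypothetical episode data:

```python
def td0(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0) state-value prediction.

    episodes: iterable of [(state, reward, next_state), ...] transitions,
              with next_state None at a terminal transition.
    """
    V = {}
    for episode in episodes:
        for s, r, s_next in episode:
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            # Bootstrapping: the target r + gamma * V(s') uses the
            # current (previously learned) estimate for the next state.
            V[s] = V.get(s, 0.0) + alpha * (r + gamma * v_next - V.get(s, 0.0))
    return V

# Two short episodes on a chain A -> B -> terminal, reward 1 at the end.
episodes = [[("A", 0.0, "B"), ("B", 1.0, None)],
            [("A", 0.0, "B"), ("B", 1.0, None)]]
print(td0(episodes))
```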

obousquet - ML Videos

by ogrisel
Online videos of talks or lectures about Machine Learning related topics

Journal of Machine Learning Research Homepage

by ogrisel & 1 other (via)
The Journal of Machine Learning Research (JMLR) provides an international forum for the electronic and paper publication of high-quality scholarly articles in all areas of machine learning. All published papers are freely available online.

IBM Research | IBM Haifa Labs| Machine learning for healthcare (EuResist)

by ogrisel (via)
Generative-discriminative hybrid technique: we plan to use a technique that combines two kinds of learning algorithms, discriminative and generative. We plan to employ Bayesian networks in the generative phase and SVMs in the discriminative phase. Algorithms under the generative framework try to find a statistical model that best represents the data; the predictions are then based on the likelihood scores derived from the model. This category includes algorithms such as Hidden Markov Models (HMM) [1], Gaussian Mixture Models (GMM) [2], and more complicated graphical models such as Bayesian networks [3].
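A minimal sketch of the general generative-then-discriminative pattern, using scikit-learn purely for illustration (the project predates it), with per-class Gaussian mixtures standing in for the Bayesian networks of the generative phase and their log-likelihood scores feeding an SVM in the discriminative phase:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Generative phase: one mixture model per class, fit on that class's data.
gmms = {k: GaussianMixture(n_components=2, random_state=0).fit(X[y == k])
        for k in np.unique(y)}

# Per-class log-likelihood scores become features for the next phase.
scores = np.column_stack([gmms[k].score_samples(X) for k in sorted(gmms)])

# Discriminative phase: an SVM trained on the likelihood scores.
svm = LinearSVC(dual=False).fit(scores, y)
print(svm.score(scores, y))
```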

2006

Learning Information Extraction Rules for Semi-structured and Free Text - Soderland (ResearchIndex)

by bcpbcp (via)
A wealth of on-line text information can be made available to automatic processing by information extraction (IE) systems. Each IE application needs a separate set of rules tuned to the domain and writing style. WHISK helps to overcome this knowledge-engineering bottleneck by learning text extraction rules automatically. WHISK is designed to handle text styles ranging from highly structured to free text, including text that is neither rigidly formatted nor composed of grammatical sentences....
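WHISK's output is a set of extraction rules; the sketch below does no learning at all. It only shows what applying one such rule amounts to, using a hand-written regular expression as a stand-in for a learned pattern, over a made-up rental-ad string:

```python
import re

# Hand-written stand-in for the kind of rule WHISK induces from tagged
# rental ads: skip to a digit before "BR" (Bedrooms), then to "$" (Price).
rule = re.compile(r"(\d+)\s*BR.*?\$\s*(\d+)", re.IGNORECASE)

ad = "Capitol Hill - 1 br twnhme. D/W W/D. Pkg incl $675."
m = rule.search(ad)
if m:
    print({"Bedrooms": m.group(1), "Price": m.group(2)})
```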
