Some really cool visualizations of the neocortex.
Monday, 26 October 2009
Saturday, 10 October 2009
Beau Lotto - Seeing Myself See: The Ecology of Mind
Talk by Beau Lotto over at Lotto Lab Studio (with whom I had the pleasure of sharing an exhibition). It includes moments of pouring skim milk into a fish bowl to demonstrate how we see color, plus demonstrations of possibly every illusion you've ever seen and then some.
Tuesday, 29 September 2009
Redwood Neuroscience Institute
If you haven't heard of it yet, check it out. They have made a ton of their previous seminar videos available, and many of them deal with vision, which is central to one of their goals. Some highlights include Dana Ballard, Jeff Hawkins, and Thomas Serre.
Labels:
berkeley,
redwood neuroscience institute,
videos
Trevor Darrell - Visual Recognition and Tracking for Perceptive Interfaces
http://www.researchchannel.org/prog/displayevent.aspx?rID=6939&fID=345
Devices should be perceptive, and respond directly to their human user and/or environment. In this talk I'll present new computer vision algorithms for fast recognition, indexing, and tracking that make this possible, enabling multimodal interfaces which respond to users' conversational gesture and body language, robots which recognize common object categories, and mobile devices which can search using visual cues of specific objects of interest. As time permits, I'll describe recent advances in real-time human pose tracking for multimodal interfaces, including new methods which exploit fast computation of approximate likelihood with a pose-sensitive image embedding. I'll also present our linear-time approximate correspondence kernel, the Pyramid Match, and its use for image indexing and object recognition, and discovery of object categories. Throughout the talk, I'll show interface examples including grounded multimodal conversation as well as mobile image-based information retrieval applications based on these techniques.
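The Pyramid Match gets only a sentence in the abstract, so here is a minimal sketch of the idea, following Grauman and Darrell's formulation: build multi-resolution histograms over feature space and count the new matches (histogram-intersection increases) appearing at each coarser level, weighting coarse matches less. The bin scheme, level count, and feature scaling below are illustrative assumptions, not the authors' actual implementation.

import numpy as np

def pyramid_match(X, Y, num_levels=5, feature_range=1.0):
    """Sketch of the Pyramid Match kernel (Grauman & Darrell).

    X, Y: (n, d) and (m, d) arrays of features, assumed pre-scaled into
    [0, feature_range). Histogram bins double in side length at each
    level; matches first found at coarser levels count for less.
    """
    def intersection(A, B, side):
        # Quantize each point to a grid cell of the given side length,
        # then take the histogram intersection of the two point sets.
        ha, hb = {}, {}
        for pt in A:
            k = tuple((pt // side).astype(int))
            ha[k] = ha.get(k, 0) + 1
        for pt in B:
            k = tuple((pt // side).astype(int))
            hb[k] = hb.get(k, 0) + 1
        return sum(min(c, hb.get(k, 0)) for k, c in ha.items())

    score, prev = 0.0, 0.0
    for i in range(num_levels):
        # Finest bins first; the bin side doubles with each level.
        side = feature_range * 2.0 ** (i - num_levels + 1)
        cur = intersection(X, Y, side)
        score += (cur - prev) / 2.0 ** i  # new matches, weighted by 1/2^i
        prev = cur
    return score

Because the intersections only touch occupied bins, the cost grows linearly with the number of features rather than quadratically with pairwise comparisons, which is what makes the kernel practical for indexing.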
Labels:
gestures,
interfaces,
multimodal,
object categories,
recognition
Erik Sudderth - Learning Hierarchical, Nonparametric Models for Visual Scenes
http://www.researchchannel.org/prog/displayevent.aspx?rID=24390&fID=345
Computer vision systems use image features to detect and categorize objects in visual scenes. In this University of Washington program, learn about Erik Sudderth's MIT/UC Berkeley research exploring hierarchical models that use contextual and geometric relationships for more effective learning from large, partially labeled image databases.
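As a toy illustration of the nonparametric machinery behind such models (the hdp-hmm label below hints at hierarchical Dirichlet processes), here is a truncated stick-breaking construction. The truncation level and concentration parameter are arbitrary choices for the sketch, not anything from Sudderth's actual models.

import numpy as np

def stick_breaking(alpha, num_sticks, seed=None):
    """Truncated GEM stick-breaking weights for a Dirichlet process.

    Priors like this let a model posit an unbounded number of object or
    scene categories and infer how many are actually needed; alpha
    controls how quickly the weights decay.
    """
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=num_sticks)  # fraction broken off each stick
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * leftover  # w_k = beta_k * prod_{j<k}(1 - beta_j)

# Example: mixture weights over (potentially unbounded) categories.
weights = stick_breaking(alpha=2.0, num_sticks=25, seed=0)
print(weights.sum())  # approaches 1 as the truncation grows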
Labels:
context,
hdp-hmm,
machine learning,
object categories,
scenes
Tony Jebara - From Perception and Discriminative Learning to Interactive Behavior
http://www.researchchannel.org/prog/displayevent.aspx?rID=2748&fID=345
A strong symbiosis lies between machine learning and machine perception. Just as we learn to reason and interact with the world through our senses, a smart sensing system could acquire data to drive higher level learning problems. Ironically, learning and probabilistic methods themselves can provide the driving machinery for perception as well. I demonstrate several examples of probabilistic sensors in wearable and room-based environments. These human-centered systems perform object detection, face tracking, 3d modeling, recognition, and topic-spotting in real-time.
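The abstract doesn't say which discriminative learner drives these systems, so as a stand-in here is a bare-bones logistic regression trained by gradient descent, the simplest example of the probabilistic, discriminative machinery the talk describes. The features, labels, and hyperparameters are all hypothetical.

import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal discriminative classifier: logistic regression fit by
    batch gradient descent. X: (n, d) features, y: (n,) labels in {0, 1}.
    Returns weights and bias defining P(label = 1 | features).
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)  # gradient of the log loss
        b -= lr * np.mean(p - y)
    return w, b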
Aude Oliva - Understanding Visual Scenes in 200 msec: Results from Human and Modeling Experiments
http://www.researchchannel.org/prog/displayevent.aspx?rID=5953&fID=345
One of the remarkable aspects of human image understanding is that we are able to recognize the meaning of a novel image very quickly and independently of the complexity of the image. This talk will review findings in human perception that help us understand which mechanisms the human brain uses to achieve fast visual recognition, accurate visual search and adequate memorization of visual information. It also will describe the limits of human perception, as well as how to use our understanding of the pros and cons of these mechanisms for designing artificial vision systems and visual displays for human use.
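Oliva's scene work is widely associated with global, holistic features computed without segmenting any objects. As a rough sketch of that flavor (not her actual GIST descriptor, which pools banks of oriented Gabor filters over a spatial grid), here is a grid-pooled gradient-energy descriptor; the grid size and normalization are illustrative choices.

import numpy as np

def scene_layout_descriptor(image, grid=4):
    """Grid-pooled gradient energy: a crude stand-in for holistic scene
    features. image: 2-D grayscale array. Returns a normalized
    (grid*grid,) vector summarizing where the image has structure,
    without detecting any individual objects.
    """
    gy, gx = np.gradient(image.astype(float))
    energy = gx ** 2 + gy ** 2
    h, w = energy.shape
    feats = [
        energy[i * h // grid:(i + 1) * h // grid,
               j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ]
    v = np.asarray(feats)
    return v / (np.linalg.norm(v) + 1e-8)  # normalize for comparability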