Date: Wed, 10 Jan 07 Time: 08:40 - 09:40
Multimedia and Web 2.0: Challenge and Synergy
One of the hallmarks of Web 2.0 is harnessing the collective intelligence of developers and users. Web 2.0 is data-driven, with users adding value. For multimedia, Flickr, YouTube, Google Maps, and eBlog are among the early applications counted as Web 2.0. These applications require effective, collaborative content/metadata creation, management, sharing, and indexing to further improve the user experience.
One main challenge that multimedia Web applications face is semantic tagging for easier information search and navigation. What Web 2.0 brings to the tagging process is a community of users who share similar interests. Community-based tagging is more likely to lead to a shared vocabulary that is both originated by and familiar to its primary users. In this talk, I present my recent work with collaborators on community-based tagging. More specifically, we propose a scalable tagging strategy with three components: 1) a unified learning paradigm (ULP), 2) parallel kernel machines, and 3) social-network data mining.
The first component, ULP, is motivated by how human beings acquire knowledge: we learn by being taught (supervised learning), by self-study (unsupervised learning), by asking questions (active learning), and by being examined on the ability to generalize (reinforcement learning). ULP substantially reduces the amount of training data needed to perform semantic tagging by leveraging unlabeled data and by maximizing the usefulness of labeled data. Our second effort, parallelizing kernel machines, aims to substantially reduce the computational time of Support Vector Machines (SVMs) and spectral clustering algorithms. Our approaches of approximate matrix decomposition and parallel computing reduce not only the computational time of training but also the memory requirement. The third component, social-network mining, uses the discovered structures of social networks to improve tagging quality as well as to prevent spam. My talk concludes with preliminary results and research directions.
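The abstract does not specify which matrix decomposition is used; as an illustrative sketch only (assuming a Nystrom-style low-rank approximation, one common way to cut both the training time and the memory footprint of kernel machines), the n x n kernel matrix can be approximated from a small set of sampled landmark points:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=0.1, seed=0):
    """Nystrom approximation K ~= C @ pinv(W) @ C.T built from m
    landmark points, storing O(n*m) values instead of the full
    O(n^2) kernel matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    landmarks = X[idx]
    C = rbf_kernel(X, landmarks, gamma)          # n x m block
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m block
    return C, np.linalg.pinv(W)

# Usage: compare the low-rank approximation against the exact kernel
X = np.random.default_rng(1).normal(size=(200, 5))
C, W_inv = nystrom_approx(X, m=50)
K_approx = C @ W_inv @ C.T
K_exact = rbf_kernel(X, X)
err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
```

The memory saving is the point: an SVM or spectral-clustering solver only ever needs products with the kernel matrix, which can be computed from the two small factors without materializing the full matrix.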
Professor Edward Chang received his M.S. in Computer Science and Ph.D. in Electrical Engineering at Stanford University in 1994 and 1999, respectively. He joined the Department of Electrical & Computer Engineering at the University of California, Santa Barbara, in September 1999. He received tenure in March 2003 and was promoted to full professor of Electrical Engineering in 2006. His recent research activities are in the areas of machine learning, data mining, high-dimensional data indexing, and their applications to image databases, video surveillance, and Web mining. Recent research contributions of his group include methods for learning image/video query concepts via active learning with kernel methods, formulating distance functions via dynamic associations and kernel alignment, managing and fusing distributed video-sensor data, categorizing and indexing high-dimensional image/video information, and speeding up Support Vector Machines via parallel matrix factorization and indexing. Professor Chang has served on several ACM, IEEE, and SIAM conference program committees. He co-founded the annual ACM Video Sensor Network Workshop and has co-chaired it since 2003. In 2006, he co-chaired three international conferences: Multimedia Modeling (Beijing), SPIE/IS&T Multimedia Information Retrieval (San Jose), and ACM Multimedia (Santa Barbara). He serves as an Associate Editor for IEEE Transactions on Knowledge and Data Engineering and ACM Multimedia Systems Journal. Professor Chang is a recipient of the IBM Faculty Partnership Award and the NSF CAREER Award. He is currently on leave from UCSB, heading the R&D effort at Google China.
Date: Fri, 12 Jan 07 Time: 09:00 - 10:10
Is Creating Multimedia Content the Ultimate Web-2.0 Modeling Challenge?
Much of the attention related to multimedia in Web-2.0 has been on methods to describe video, image, and (occasionally) audio content. By providing a structured, community-based mechanism for collecting keywords and tags, this approach to classifying media is intended to compensate for the failure of media creators to adequately describe their content using any one of a wide range of existing metadata standards. The inherently social enterprise of community tagging presents a set of interesting challenges in a variety of fields – perhaps most importantly, quality control – but it seems to suffer from a critical flaw: it assumes that the activity of content description is inherently separate from the activity of content creation. We feel that a more holistic view of content manipulation is required.
In this talk, we consider the process of community content creation and viewing. We analyze the process of creating and extending media objects in light of seven principles that are often associated with Web-2.0 systems. These principles, which include harnessing collective intelligence and the creation of rich user experiences, are applied to the process of creating dynamic collections of audio/video/text/image content that is targeted for broad reuse during its lifetime. The dynamic aspects of such content can be described in terms of an incremental authoring process in which a presentation is enriched over time based on changes introduced by a diverse hybrid user/producer community, and exposed through sets of interwoven, conditionally-active content layers. Such a view of authoring raises a wide range of system and user modeling issues, as well as real-world legal issues related to the management of content rights. The goal of the talk is to examine both conventional notions of content authoring and some unconventional notions about Web-2.0 technology.
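The layered model sketched above can be made concrete with a small, purely hypothetical data structure (the class names and conditions below are illustrative and are not part of the Ambulant platform or any SMIL document model): each community contribution becomes a layer with an activation condition, and rendering selects the layers whose conditions hold in the current viewing context.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Layer:
    """One content layer plus the condition deciding when it is active."""
    name: str
    media: str  # e.g. a video clip, caption track, or annotation file
    is_active: Callable[[Dict], bool]

@dataclass
class Presentation:
    """A presentation incrementally enriched by adding layers over time."""
    layers: List[Layer] = field(default_factory=list)

    def add_layer(self, layer: Layer) -> None:
        # Incremental authoring: community contributions extend,
        # rather than overwrite, the existing presentation.
        self.layers.append(layer)

    def render(self, context: Dict) -> List[str]:
        # Expose only the layers whose conditions hold for this viewer.
        return [layer.media for layer in self.layers
                if layer.is_active(context)]

# Usage: a base video plus two conditionally-active layers
p = Presentation()
p.add_layer(Layer("base", "lecture.mp4", lambda ctx: True))
p.add_layer(Layer("captions", "captions_nl.srt",
                  lambda ctx: ctx.get("lang") == "nl"))
p.add_layer(Layer("expert-notes", "annotations.xml",
                  lambda ctx: ctx.get("role") == "physician"))

print(p.render({"lang": "nl"}))        # base video with Dutch captions
print(p.render({"role": "physician"}))  # base video with expert annotations
```

The design choice the sketch highlights is that description and creation are not separated: adding a caption track or an annotation layer is itself an act of authoring, evaluated at view time against the user's context.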
Several use cases of holistic multimedia presentation authoring using the Ambulant Annotator platform will be discussed – in domains ranging from home entertainment to medical applications – and a set of document model structuring extensions will be described that will allow for more flexible and temporally evolving presentations to be constructed.
Dr. Bulterman is a senior researcher and head of Distributed Multimedia Languages and Infrastructures at CWI, the Dutch national research center for mathematics and computer science in Amsterdam. He has held various research and management positions at CWI since arriving there in 1988. From 1998 to 2002, he was CEO and chief technical director of Oratrix Development in Amsterdam, a company dedicated to providing elegant solutions for under-appreciated multimedia authoring and document processing problems. Bulterman received his Ph.D. in computer science from Brown University in 1981. He is on the editorial boards of ACM TOMCCAP and the ACM/Springer Multimedia Systems Journal. He is the current chair of the W3C Synchronized Multimedia working group and is co-author of the book SMIL 2.0: Interactive Multimedia for Web and Mobile Devices. Bulterman has been active in the field of multimedia systems since 1990 and has been a frequent contributor to the MMM conference series. He lives in Amsterdam with his wife and two children.