
Silicon Valley ACM SIGGRAPH Past Events

2013 Events:

Thursday, February 21, 2013 Bring Your Own Animation (Joint SF/SV ACM SIGGRAPH)

This hybrid event gives the San Francisco Bay Area ACM SIGGRAPH community the opportunity to show off their work and receive feedback. Whether you're a student, an amateur, or a junior professional, join us to meet the pros and screen your pieces in an informal setting.

Thursday, January 17, SIGGRAPH 2012 Computer Animation Festival (Electronic Theater)

The SIGGRAPH Computer Animation Festival is the annual festival for the world's most innovative, accomplished, and amazing digital film and video creators. An internationally recognized jury receives hundreds of submissions and presents the best work of the year in daily Festival Screenings and the Electronic Theater at the SIGGRAPH Conference. Selections include outstanding achievements in time-based art, scientific visualization, visual effects, real-time graphics, and narrative shorts. This will be a subset of what was shown at SIGGRAPH 2012.

2012 Events:

Thursday, November 8, Panasonic Silicon Valley Laboratory
By David Kryze, Andrea Melle, Yue Fei

At SIGGRAPH 2012, Panasonic Silicon Valley Laboratory demonstrated a tabletop interface for a new 3D interactive user experience. It combines virtual-reality 3D graphics, 3D audio, and natural free-hand 3D interaction, and acts as a cloud-based hub for personal devices, enabling social interaction among multiple users. Possible applications include kiosks, virtual tourism, shopping, education, training, environment simulation, and data visualization.

At this event, the Panasonic team will discuss the rapid-prototyping process behind the demo, how they built the new user experience, and details of the technologies developed for the prototype. You will also have a chance to try the prototype in person.

David Kryze heads the Universal Design Group at Panasonic Silicon Valley Laboratory. He works with a multi-disciplinary team on applying user-centered design practices to the creation of new product concepts that leverage emerging technologies, with a specific focus on user experience. Previously he worked on speech recognition research as a software engineer at Panasonic. He holds M.Sc. degrees in Computer Science and Electrical Engineering from Ecole Polytechnique, Telecom Paristech, and the Eurecom Institute in France (1997-99).

Andrea Melle is a software engineer with experience in Human-Computer Interaction, Computer Vision, and Computer Graphics. He currently works at the Panasonic Silicon Valley Laboratory on research and new concept development around natural user interfaces and interactive systems. Before that, he worked as a web/mobile designer and developer at an Italian creative firm. Andrea holds a B.Sc. and an M.Sc. in Cinema and Media Engineering from the Polytechnic of Turin, Italy (2009) and an M.Sc. in Multimedia Engineering from Institute Eurecom, France (2011).

Yue Fei is a lead engineer at Panasonic Silicon Valley Labs, where he works on technologies enabling new user experiences and user interaction. Previously he developed one of the first 3D desktop user interfaces (2004). Yue Fei holds a Ph.D. (2005) in Space Physics from Rice University, with an emphasis on 3D visualization and computational fluid dynamics, and a B.S. (1999) in Physics from Fudan University, China.

Thursday, October 25, Emotiv's EPOC Neuroheadset
by Adam Rizkalla, Emotiv Lifesciences
Kim Du, Emotiv

The Emotiv EPOC is a wireless neuroheadset for neuro-signal acquisition and processing that allows users to control technology with their minds. By detecting facial expressions, emotional states, and conscious thoughts, Emotiv is changing the way people interact with technology. Applications using the EPOC include medical research, hardware controllers, and gaming, to name a few. Using the Emotiv SDK, developers can easily integrate the EPOC headset into their projects.
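
To make the integration model concrete, here is a minimal TypeScript sketch of the kind of event-polling loop such SDKs typically expose. Every name in it (HeadsetEvent, pollHeadset, the detection families) is hypothetical, invented for illustration; this is not the actual Emotiv SDK API.

    // Hypothetical sketch only -- not the Emotiv SDK API.
    type HeadsetEvent = {
      kind: "expression" | "emotion" | "command"; // the three detection families
      label: string;                              // e.g. "blink", "excitement", "push"
      intensity: number;                          // detection strength in [0, 1]
    };

    // Stand-in for a real device binding; emits an occasional synthetic event.
    function pollHeadset(): HeadsetEvent | null {
      return Math.random() < 0.1
        ? { kind: "command", label: "push", intensity: Math.random() }
        : null;
    }

    // Map confident mental-command detections to an application action.
    setInterval(() => {
      const ev = pollHeadset();
      if (ev?.kind === "command" && ev.intensity > 0.6) {
        console.log(`apply "${ev.label}" at strength ${ev.intensity.toFixed(2)}`);
      }
    }, 50); // poll at roughly 20 Hz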

Adam Rizkalla graduated from Cal Poly, San Luis Obispo with a B.S. in Computer Engineering in June 2012, and is currently working as a Software Engineer at Emotiv Lifesciences. He is a partner and the lead software designer of an upcoming music application using the Emotiv headset, titled Orchid™. His previous work includes development and debugging of logistics software for the U.S. military, firmware design for the controls and communication of a rapid battery-exchange unit for electric vehicles, and brief work on RFID integration for iPad devices.

Kim Du is Corporate and Developer Relations Director at Emotiv, where she is responsible for growing Emotiv technology into new applications and markets. Prior to working with Emotiv, Kim spent 12 years in the newspaper industry at the San Jose Mercury News, Contra Costa Times, and MediaNews Group.

Thursday, June 21, Nvidia Cloud Technology
by Thomas Ruge, Nvidia

NVIDIA recently announced a number of new products and technologies that enable visually rich and highly interactive applications in the cloud. This presentation discusses the challenges of putting "3D in the cloud" and how they can be addressed with NVIDIA's new offerings. We will focus particularly on virtualization of GPUs and real-time video encoding/decoding on GPUs, and give a demonstration of NVIDIA's cloud technologies.

Thomas Ruge is currently working as a Software Manager at NVIDIA on 3D cloud technologies. Prior to his role at NVIDIA, he was the Co-Founder and CTO of ModViz, a venture-capital-backed startup in Oakland, CA, founded in 2004 and acquired by NVIDIA in 2008. Before his arrival in the US in 2001, he was responsible for the Virtual Reality Lab at the Siemens research labs in Munich, Germany, and worked for multiple European research institutions in high-energy and quantum physics. Thomas holds a Masters/Diploma in Physics from the Technical University in Munich, Germany, and a Master of Business Administration from Columbia Business School, NY. Living in Willow Glen, CA with his American wife and two bouncy dogs, he enjoys long dinners with friends and strangers over passionate discussions about anything, and loves to tinker with any immature technology he can get his fingers on.

Thursday, January 19, 2012, The Tell-Tale Heart: Creating Your Dream Animated Film from Your Home Office

The Tell-Tale Heart Animated Movie

With the right skills, tools, and ambition, you can create festival-winning animated productions. Following the screening of the movie (16 mins.), I will walk you through my journey of creating an animated movie watched by thousands, domestically and internationally. Covering the whole process, from concept through creation to festivals and marketing, I will discuss each step with examples and demos from The Tell-Tale Heart's development. All the tips, tools, secrets, challenges, and shortcuts learned in its creation will be shared.

If you plan to create your own CG animated movie, or are interested in the process beyond the marketing extras on the DVDs, this SIGGRAPH presentation is for you.

The Tell-Tale Heart Animated Movie:
"Best Horror Short" - Rhode Island International Film Festival, 2010
"Best Festival Director - Edgar Award" HP Lovecraft Film Festival, 2010
In 2011, over 400 copies sold to foreign and domestic schools for teaching Edgar Allan Poe's classic short story.

www.TheTell-TaleHeart.com

Michael Swertfager received his degree in business management from San Jose State University. Over the past decade, he worked as a project manager at Cisco Systems and Apple on enterprise application and e-commerce development. At night he attended the Academy of Art in SF and Cogswell Polytech, studying traditional art and computer animation. Living in Santa Cruz, CA with his wife, kids, and dog, he enjoys scuba diving and bringing Poe classics to life through animation and applications.

2011 Events:

Thursday December 8, 2011 SIGGRAPH 2011 Electronic Theater

The SIGGRAPH Electronic Theater has stood alone in curating and showcasing the very best of computer animation since its inception. Every year, highly respected jurors choose from among hundreds of submissions to select the year's best computer animations, to be shown at the annual SIGGRAPH conference.

Continually raising the bar in technical ability and ambition, studios, researchers, artists, students, and other animation fanatics have contributed another year's worth of their most impressive accomplishments.

Silicon Valley ACM SIGGRAPH continues its 2011-2012 season with the legendary Electronic Theater event. Starting with the cream of animation shorts submitted from all around the world, a jury composed of the animation elite has selected the best to bring them together on the silver screen of the SIGGRAPH 2011 Electronic Theater. Nearly 30 animations, from the dramatic to the amusing, will amaze you with their content, their level of technical skill and their ground-breaking artistic vision.


SIGGRAPH Video Review is the world's most widely circulated video-based publication. Since 1979, SIGGRAPH Video Review has illustrated the latest concepts in computer graphics and interactive techniques.

More than one hundred programs provide an unequaled opportunity to study advanced computer graphics theory and applications. SIGGRAPH Video Review is the primary instrument for academic publishing and distribution of new work in the field of computer graphics and an important resource tool for scientists, engineers, mathematicians, artists, filmmakers and other computer graphics professionals.

Wednesday, November 16, 2011 WeVideo: Leveraging the Cloud, Mobile, and Social to Transform Video Editing
by Jostein Svendsen

From 2005 to 2010, online video grew at a phenomenal rate of 910%, yet the majority of videos shared have been in their raw, unedited format. The barriers of cost, complexity, and required computing resources have stood in the way. WeVideo is the first to conquer this and deliver a full-featured, social video editor over the Internet and in a SaaS model. In our presentation, we will provide insights into how we are harnessing technology in new and innovative ways to deliver “eye-popping” advantages to users. We will discuss our patent-pending technology and architecture and how we have grown WeVideo from a little company in Norway serving the K-12 education market to THE Company transforming online video editing.

Technical and functional advantages of our approach:

  • Real-time. Real-time multi-layer playback in the browser without rendering time.
  • Social editing. Video editing in context to groups, projects or events -- enabling sharing of clips and social editing within the community.
  • Performance. Harnessing of low-end servers through proprietary parallelization technology to render editing jobs across distributed servers and CPU cores (see the sketch after this list).
  • Flexibility. One rendering code across devices (web, tablet, mobile and TV), as well as the technology tier (client- and server-side).
  • Creative freedom. Easy-to-use drag and drop environment or WeVideo Wizard environment for a fully automated experience.
  • Export quality. From 360p to 1080p HD.
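
To illustrate the distributed-rendering idea from the Performance bullet, here is a toy TypeScript sketch: cut a timeline into segments, render them concurrently, and reassemble them in order. The chunk size, the renderSegment placeholder, and everything else here are assumptions for illustration; WeVideo's actual parallelization technology is proprietary and unpublished.

    // Toy sketch of chunked parallel rendering -- not WeVideo's implementation.
    type Segment = { startSec: number; endSec: number };

    function splitTimeline(durationSec: number, chunkSec: number): Segment[] {
      const segments: Segment[] = [];
      for (let t = 0; t < durationSec; t += chunkSec) {
        segments.push({ startSec: t, endSec: Math.min(t + chunkSec, durationSec) });
      }
      return segments;
    }

    // Pretend renderer: in a real system, each call would be dispatched to a
    // different server or CPU core.
    async function renderSegment(seg: Segment): Promise<string> {
      return `frames[${seg.startSec}-${seg.endSec}]`;
    }

    async function renderJob(durationSec: number): Promise<string[]> {
      const segments = splitTimeline(durationSec, 10); // 10-second chunks
      return Promise.all(segments.map(renderSegment)); // results keep timeline order
    }

    renderJob(45).then((parts) => console.log(parts.join(" + ")));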

And, through our product demonstration, you will get a first-hand view of how easy video story creation can be.

Jostein Svendsen is the Chief Executive Officer and Founder of WeVideo. Based in Oslo, Norway, and Silicon Valley, Jostein is a highly regarded serial entrepreneur in both Europe and North America, having founded and grown several successful companies in digital media, digital financial services, and digital commerce. Jostein founded two of Norway's leading multimedia and Internet companies, both of which went on to become successful publicly traded companies in Sweden, one employing more than 2,000 people across the globe.

In 2002, Jostein helped launch a new credit card and savings bank in Norway, which quickly became the first branchless/online-only bank in the Norwegian market. The bank was acquired by Banco Santander in 2005.

Immediately before he became CEO of Creaza, Jostein concentrated on investments and projects in digital media and online financial services, with an emphasis on guiding companies to become extended global enterprises. He has advised organizations in the public and private sectors on both digital and traditional business practices for companies in banking, insurance, telecommunications, energy and media.

About the Company: WeVideo is redefining video editing. The Company offers a cloud-based video storytelling platform that provides complete creative control and uniquely enables collaboration. At the same time, WeVideo is taking down the barriers typically associated with video editing, namely cost, complexity, and heavy computing requirements. Video creation can be done from your smartphone, tablet, or computer, making video storytelling convenient and timely. WeVideo has tackled the challenge of rendering time, speeding up editing, and enables finished quality ranging from Internet resolution to HD broadcast. Whether using video for personal, business, or professional stories, WeVideo makes video storytelling accessible to everyone.

Thursday, October 20, 2011 Augmented Reality: State of the Art and Future Directions & Company Tour
by Ryan Ismert. http://sportvision.com

Sportvision is best known for our augmented reality special effects seen on sports broadcasts, such as the virtual yellow “1st and 10” line for football, or the KZone™ strike zone on ESPN baseball. Our systems, however, span a broad range of capabilities beyond rendering, including a spectrum of camera tracking technologies, object and player tracking, and sports data analysis. Come take a tour and see our current technologies in action as well as a brief presentation on future directions for Sportvision and Augmented Reality in general.

Ryan is the General Manager for Augmented Reality at Sportvision, where he is responsible for growing Sportvision technology into new applications and markets. Prior to his current role, the majority of Ryan's eight years at the company was spent as Director of Engineering, leading a variety of initiatives including advanced camera tracking and rendering technologies and system optimization and architecture. His prior experience includes stints as a startup founder, as a consultant to the leveraged buyout industry and the island of Aruba, and in Naval Intelligence. He holds an advanced degree in image-based modeling and rendering from Cornell University. Ryan is a frequent speaker at industry events on graphics, broadcast video processing, and augmented reality.

About Sportvision:
Sportvision, Inc. is the nation's premier innovator of sports and entertainment products for fans, media companies and marketers. Sportvision solutions have enhanced experiences for fans and marketing partners of the NFL, MLB, NASCAR, The Olympic Games, NHL, PGA TOUR, LPGA Tour, NBA, NCAA, WTA, MLS, IRL, X Games and other sporting events On-Air and online. As sports fans demand richer and fuller entertainment experiences, Sportvision delivers a heightened sports-viewing experience across all forms of media. Sportvision has deployed products across live television, internet and iTV for all major sports on all major U.S. networks, including ESPN, ABC, Fox, CBS, NBC, USA, Turner, NFL Network, CBC (Canada), Seven Network (Australia), TV Asahi (Japan), MBC (Korea).

Thursday, June 30, 2011 Stanford Virtual Human Interaction Lab
by Kathryn Segovia. http://vhil.stanford.edu/

Identity is highly manipulable in avatar-based media. Users experience not only an increased ability to manipulate their own identities via avatars, but also an increased likelihood that their digital identities may be controlled by another person or algorithm. In everyday life people are presumed to be agents of their actions, but when actions are de-coupled from identity this assumption begins to break down. Kathryn's talk will focus on the areas of theory that inform this increasingly prevalent phenomenon and overview a few studies that reveal how individuals respond to specific types of identity manipulation.

Kathryn Segovia is a PhD candidate in Communication at Stanford University.  She completed her bachelor's degree in Communication and her master's degree in Psychology both at Stanford before returning as a PhD student to further pursue her research interests in the Virtual Human Interaction Lab at Stanford.  Her research focuses on identity manipulation in avatar-based interactions and the punitive responses to such behavior.  Following graduation, Kathryn hopes to pursue a career in trial consulting or academia.

Thursday June 2, 2011 Working on Kinect
by Johnny Lee, Rapid Evaluator, Google, Inc.

In the first 60 days after launch, Kinect for Xbox 360 shipped over 8 million units, earning it the title of "Fastest Selling Consumer Device" from the Guinness Book of World Records. It is arguably one of Microsoft's most ambitious recent undertakings, pushing contemporary limits of hardware manufacturing, real-time computer vision, user interface concepts, and traditional software engineering practices. This talk will chronicle some of my experiences working as a core researcher on this project from early incubation to product release, lessons learned, and difficult decisions along the way.

In 2008, Lee graduated with a Ph.D. in Human Computer Interaction from Carnegie Mellon University, where he explored a variety of technologies to enhance the way we interact with computing devices. His projects spanned multi-touch, haptics, immersive displays, brain-computer interaction, advanced projection technologies, augmented reality, and motion capture. His videos demonstrating how to create low-cost interactive whiteboards and 3D displays using a Nintendo Wii remote have accumulated over 15 million views on YouTube, and in 2008 earned him a spot on the prestigious TR35 list of the world's top 35 innovators under the age of 35. In mid-2008, Lee joined Microsoft as a researcher in the Applied Sciences Group, where he focused his efforts on helping Xbox develop Kinect, a controller-free motion capture system. In early 2011, Lee joined Google as a Rapid Evaluator.

Thursday May 26, 2011
Morpheme Advanced Animation System
by Christoph Birkhold

Morpheme is a game animation engine and tool chain targeting high-quality character animation in particular. A key benefit is that it moves the runtime animation authoring process away from code or text files and into the hands of animators via the morpheme:connect graphical UI. The morpheme:connect application presents the user with an intuitive GUI for creating animation state machines and blend trees, while providing an accurate preview of in-game motion. Advanced procedural animation techniques can also be authored within morpheme:connect while maintaining direct artist control. Morpheme is fully integrated with NVIDIA PhysX, allowing character animation and physics simulation to be seamlessly combined. Finally, Euphoria, NaturalMotion's unique Dynamic Motion Synthesis technology used in games like GTA IV, The Force Unleashed, and Red Dead Redemption, is now fully integrated into the morpheme:connect authoring environment, taking real-time procedural animation even further.
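
To give a flavor of what a blend tree is, here is a toy TypeScript sketch in the spirit of animation middleware; it is not the morpheme API, and a pose is reduced to a single joint angle to keep it short.

    // Toy blend tree -- illustrative only, not NaturalMotion's API.
    type Pose = number; // stand-in for a full skeleton pose

    interface BlendNode { evaluate(timeSec: number): Pose; }

    // Leaf node: samples an animation clip at a given time.
    class ClipNode implements BlendNode {
      constructor(private sample: (t: number) => Pose) {}
      evaluate(t: number): Pose { return this.sample(t); }
    }

    // Interior node: linearly blends two children (weight 0 = all a, 1 = all b).
    class BlendTwo implements BlendNode {
      constructor(private a: BlendNode, private b: BlendNode, public weight: number) {}
      evaluate(t: number): Pose {
        return (1 - this.weight) * this.a.evaluate(t) + this.weight * this.b.evaluate(t);
      }
    }

    // Blend a walk and a run cycle; an authoring tool like morpheme:connect
    // exposes weights like this as animator-facing parameters.
    const walk = new ClipNode((t) => Math.sin(t * 2));
    const run = new ClipNode((t) => Math.sin(t * 4) * 1.5);
    const locomotion = new BlendTwo(walk, run, 0.3);
    console.log(locomotion.evaluate(1.0));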

Christoph Birkhold is currently Head of Field Engineering at NaturalMotion.

Thursday April 21, 2011

2010 SIGGRAPH Asia Electronic Theater Screening

presented by Silicon Valley ACM SIGGRAPH

At this screening, we will be presenting a special version of the Electronic Theater from SIGGRAPH Asia 2010 Conference, one of the world's most prestigious film extravaganzas. The Theater showcases the very best work in the world that advances animation, visualization, real-time rendering, and entertainment via computer animation.

Tuesday March 8, 2011

Stereoscopic 3D Vision: Looking at the Next Decade
by Sunil Jain

After multiple decades of fizzled attempts, 2010 felt like a real inflection point for stereoscopic 3D. The significance of this major transition in visualization from 2D to 3D is profound. Nature has equipped humankind with binocular vision; it is the limitation of technology that has kept us staring, fixated, at 2D screens. This next decade could very well be a step towards realism in viewing and touching.

Content is limited. Solutions are expensive. Technologies are vying for the top place in each market segment. Standards and interoperability are in their infancy, and the tug of war between verticals and horizontals has already begun! Algorithms that can accurately emulate the depth perception and visualization processes natural to the human brain are pretty much non-existent; they are the holy grail of S3D quality, and their absence underlies the skepticism 3D encounters after its failed attempts in the past. Mechanisms of content creation, processing, rendering, and consumption will need to change, positing growth opportunities for everyone in the S3D food chain. Let's discuss what this next decade with S3D looks like.
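
For a sense of the geometry involved, a textbook stereo relation (background, not from the talk itself): a scene point at depth Z viewed by two cameras with focal length f and interaxial baseline b produces a horizontal image disparity

    d = \frac{f \, b}{Z}

so disparity falls off quickly with distance, and emulating natural depth perception means controlling d convincingly for every pixel, which is one reason good S3D algorithms are hard.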

Sunil Jain is Lead Architect and Strategist at Intel. As part of the PC Client Group, Sunil is responsible for bringing innovative technologies to PC platforms such as Desktops, Notebooks, and handheld devices.

Sunil joined Intel in 1999 and has played several roles: chip architect, platform architect, Director of Strategic Technology Programs, and manager of the Video and Display Architecture teams; he is now the Lead Architect and Strategist. Sunil started his career in 1985 at Siemens Medical and served as founder-managing-director of Span Mechnotronix from 1991-98. Sunil has multiple patents and innovations to his credit, including the world's first PMIC for x86 systems and the first true universal 3D glasses, which work with active and passive systems and with PC and CE devices in many usage scenarios.

Thursday March 10, 2011

Evolution of Digital Culture: 3D Trends Marketers Need to Care About
by Kate Ertmann

The advantage of using 3D in marketing is to give representation to concepts and ideas that are difficult to explain with words or pictures alone. Examples will be shown to support studies showing that viewers' brains behave differently when viewing 3D. How about the 'gamification' of all things, where games are branded as entertainment to create immersive brand experiences? And how about 3D augmented reality on mobile devices? Creating a new kind of experience creates opportunities to connect with new customers, as well as leveraging an 'old' medium, print, as the bridge to the new one. We'll discuss how 3D, gaming, and augmented reality together create a powerful marketing opportunity (it's been done), creating something that is also now shareable. 3D brings the experience you want to create for consumers at each step of the process: awareness, education, persuasion, affinity. Plus: the animator gang sign will be revealed!

A child actor for kids' television programming and commercials, Kate Ertmann renounced her Hollywood hopes in favor of a telecommunications degree from Ohio University. She produced the independent feature film "Pop", is one of the founders of the Portland chapter of Women in Animation, and is an active member of the Portland Rotary Club. She also serves on the board of Bradley Angle, the oldest domestic violence services organization on the West Coast. Kate became a partner at ADi in 2000, and in 2008 became sole owner of Animation Dynamics, leading the company to produce innovative animation for a myriad of clients and a variety of marketing, advertising, educational, and training needs. Kate's favorite food is buttered popcorn.

Thursday Jan 20, 2011

Khronos Technology Presentations
by Khronos Group

Khronos Group is very pleased to host the ACM Silicon Valley Chapter's January 2011 meeting. We will open the session with a few demos and refreshments, and many of the Khronos Group work group chairs will be on hand to greet you. After normal ACM meeting business is addressed, our presentation will give an overview of Khronos technology with a special focus on OpenVG. We are eager to customize our presentation to address any technology or topics that interest you. To make that easy, the simple registration form below both gives us a proper catering headcount and gives you an opportunity to request topics or ask questions, which we can address in our presentation. We look forward to meeting you there!


2010 Events:

Thursday, December 9th, 2010

SIGGRAPH 2010 Electronic Theater
by the SIGGRAPH Silicon Valley Chapter


Thursday, November 18, 2010

Waterbending: Water Effects on "The Last Airbender"
by Chris Twigg & Ian Sachs, Industrial Light & Magic (ILM)

For the waterbending effects in "The Last Airbender", Industrial Light & Magic needed a water simulation pipeline that allowed a high degree of animator control while still providing believable fluid behavior. To this end, we developed several new pieces of technology to complement our existing particle level set and FLIP-PIC simulation engines. "Shape constraints" provided artists with extremely tight, geometry-based PLS simulation controls, while a new grid-based surface tension calculation provided improved particle structure for splashing water effects. Finally, a node-graph-based editing system was developed for seamlessly combining and rendering the resulting large volumetric and particle datasets.
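
For background on the FLIP-PIC engines mentioned above, the standard particle-velocity update from the fluids literature (not necessarily ILM's exact formulation) blends the two schemes after each grid solve:

    v_p \leftarrow (1 - \alpha)\, v_{\text{grid}}(x_p) + \alpha \left( v_p + \Delta v_{\text{grid}}(x_p) \right)

Here v_p is a particle velocity, v_grid and Δv_grid are the grid velocity and its change interpolated at the particle position x_p, α = 0 gives the smooth but dissipative PIC update, α = 1 gives the lively but noisier FLIP update, and production solvers typically sit somewhere in between.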

Chris Twigg finished his Ph.D. with Doug James at Carnegie Mellon in 2008 and has been working at ILM ever since on cloth, hair, flesh, and water simulation.

Ian Sachs earned his BSE in Computer Science from Princeton University in 1997 and has been working in the visual effects industry for the past 8 years, focusing on smoke, fire, and water simulation.

Thursday, October 21, 2010

The Camera of the Future
by Todor Georgiev

Recently, we and others have gained a deeper understanding of the fundamentals of the plenoptic camera and Lippmann sensor. As a result, we have developed new rendering approaches to improve resolution, remove artifacts, and render in real time. By capturing multiple modalities simultaneously, our camera produces images that are focusable after the fact and that can be displayed in multi-view stereo. The camera can also be configured to capture HDR, polarization, multispectral color, and other modalities. With superresolution techniques we can even render results that approach full sensor resolution. During our presentation we will demonstrate interactive real-time rendering of 3D views with after-the-fact focusing.
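
For readers new to light fields, after-the-fact focusing rests on a standard relation from the light-field literature (background, not necessarily the speakers' exact formulation). With a captured light field L(u, v, s, t), parameterized by aperture coordinates (u, v) and sensor coordinates (s, t), the image focused at relative depth α is, up to normalization,

    E_\alpha(x, y) = \iint L\!\left(u, \; v, \; u + \frac{x - u}{\alpha}, \; v + \frac{y - v}{\alpha}\right) du \, dv

that is, each sub-aperture view is shifted according to the chosen focal plane and the shifted views are summed.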

Related technologies: Integral Photography, Light Field, Multi-Aperture Sensor, TOMBO, Panoptes.

Todor Georgiev is a senior research scientist at Adobe Systems, working closely with the Photoshop group. Holding a PhD in theoretical physics, he concentrates on applying mathematical methods from physics to image processing, graphics, and vision. He is the author of the Healing Brush tool in Photoshop, the method better known as Poisson image editing. Currently he is working on a range of ideas related to plenoptic cameras and the capture and manipulation of radiance, which extend photography from 2D to 3D. He has a number of papers and patents in these areas.
http://www.tgeorgiev...

Co-author Andrew Lumsdaine received the PhD degree in electrical engineering and computer science from the Massachusetts Institute of Technology in 1992. He is presently a professor of computer science at Indiana University, where he is also the director of the Open Systems Laboratory. His research interests include computational science and engineering, parallel and distributed computing, mathematical software, numerical analysis, and radiance photography. He is a member of the IEEE, the IEEE Computer Society, the ACM, and SIAM.

Thursday, June 17th

HTML5: Web Development to the Next Level
by Brad Neuberg

HTML5 brings a wealth of new functionality for web applications, including drag and drop, offline, file access, geolocation, and more. In addition, CSS3 introduces new layout options and animations that can make your web applications more professional and easier to put together. Associated standards like WebGL and SVG are making the web even more powerful. Come and learn how to use these new technologies in your own applications!
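
As a taste of the APIs involved, here is a small browser-side TypeScript sketch combining two of the features mentioned above, geolocation and the canvas element. It assumes it runs on a page where appending to document.body is acceptable; everything else uses standard HTML5 APIs.

    // Minimal HTML5 sketch: draw the user's coordinates onto a canvas.
    const canvas = document.createElement("canvas");
    canvas.width = 300;
    canvas.height = 100;
    document.body.appendChild(canvas);
    const ctx = canvas.getContext("2d");

    navigator.geolocation.getCurrentPosition((pos) => {
      if (!ctx) return;
      ctx.fillStyle = "#334";
      ctx.fillRect(0, 0, canvas.width, canvas.height); // background
      ctx.fillStyle = "#fff";
      ctx.font = "14px sans-serif";
      ctx.fillText( // report the position reported by the Geolocation API
        `lat ${pos.coords.latitude.toFixed(3)}, lon ${pos.coords.longitude.toFixed(3)}`,
        10, 55);
    });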

Brad Neuberg is a software engineer at Google on the Buzz team. Previously he was a developer advocate at Google for the Open Web and HTML5. He is the creator of a number of JavaScript libraries and frameworks for expanding the capabilities of web applications, including Dojo Storage, Dojo Offline, Really Simple History, and SVG Web. Brad worked with Douglas Engelbart on the HyperScope project; explored deeply collaborative web browsers with Paper Airplane; worked on one of the first web-based RSS aggregators; and was a Developer Advocate for the Gears project. Brad also created Coworking, an international grassroots movement to establish a new kind of workspace for the self-employed.

Brad Neuberg's blog and twitter account:
http://blog.codinginparadise.org/
http://twitter.com/bradneuberg

Slides from the presentation:
http://apirocks.com/html5/html5.html

Other references:
http://caniuse.com/
Shows which features are available for which browser

http://www.inkscape.org/
An Open Source SVG (vector graphics) editor.

http://www.modernizr.com/
A JavaScript library for new web technologies and legacy support.

http://smokescreen.us/
An open-source project for converting Flash to JavaScript/HTML5.

http://diveintohtml5.org/
On-line book detailing some features.

Thursday, May 13th 2010

Exponent, Inc. Litigation Graphics Presentation
by Christopher Espinosa, Kathleen Pittman, and Gil Matityahu

Christopher Espinosa will present an introduction to Exponent Engineering and Scientific Consulting and the role of the Visual Communication team. We combine cutting-edge artistic talent with engineering and scientific expertise to create visually compelling and technically accurate graphics and animations for industrial and legal applications. Chris will show some examples of the team's work as well as discuss some of the challenges and breakthroughs in forensic and demonstrative animation and exhibits.

Kate Pittman and Gil Matityahu will give a demonstration on importing and working with data from engineering sources in 3D Studio Max Design. Photogrammetry, the extraction of three-dimensional information from photographs, will be the main focus of their demonstration. They will discuss the entire process, including how to take and use photographs for photogrammetry, processing the photos in PhotoModeler, and exporting the data to 3D Studio Max to create a compelling, convincing, and accurate model of a scene. They will also briefly discuss the use of other types of data, including survey information and motion simulation data from HVE.
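
As background, photogrammetry inverts the textbook pinhole projection model (standard computer-vision form, not specific to PhotoModeler). A world point (X, Y, Z) maps to an image point (x, y) via

    \lambda \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = K \, [\, R \mid t \,] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}

where K holds the camera intrinsics, [R | t] its pose, and λ a projective scale; identifying the same point in two or more calibrated photographs lets the 3D position be recovered by triangulating these constraints.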

PRESENTERS' BIO:
Christopher Espinosa manages Exponent's graphics, video, animation, multimedia, photography, and demonstrative evidence efforts. In addition, he supports the scientific, engineering, and marketing staff in the production and presentation of technical information. Christopher has many years of experience in the field of visual arts and has served in such positions as Multimedia Architect for Hewlett Packard, Inc., Senior Multimedia Animator for Lippincott Williams & Wilkins Publishers, and freelance artist for the American Heart Association, Wolters Kluwer Publishers, and HP Labs. Christopher has a B.A. in Fine Arts/Computer Graphics from Rutgers University, 1997.

Kathleen Pittman is responsible for the creation of animation, graphics, video, multimedia, and diagrams used as demonstrative evidence and marketing materials. In addition, she supports the scientific, engineering, and marketing staff in the production and presentation of technical information and assists with three-dimensional analysis, laser scan data processing, and photogrammetry.

Ms. Pittman is proficient in 3D Studio Max, AutoCAD, Leica CloudWorx, Adobe Illustrator, Photoshop, Flash, After Effects, Premiere, PhotoModeler, and ZBrush. Her experience prior to Exponent includes design work for educational materials and city planning firms. Kate has a B.S. in Media Arts and Animation from the Art Institute of Philadelphia, 2006.

Gil Matityahu's experience includes 3D animation and modeling, post video production, 2D illustration and image manipulation, and scripting for various platforms. He accurately reconstructs scaled 3D models of buildings, vehicles, machines, landscapes, and other objects. He has worked on projects that include computer animation of accident recreations and simulations, court boards, interactive presentations, and posters. Gil has a B.S. in Media Arts and Animation from the Art Institute of Philadelphia (with honors), 2005.

Thursday April 15, 2010

Computer Graphics in Protein X-ray Crystallography and Drug Discovery
by Debanu Das

The atomic-level structure determination of proteins by x-ray crystallography involves concepts and applications from several different scientific and engineering fields. The detailed analysis of molecular structure is used to understand how proteins function, with applications in drug discovery and protein engineering and design. Computer graphics is used extensively in biomolecular model building and in the analysis of protein structure-function relationships and ligand interactions. A brief introduction to protein crystallography will be provided, and the use of computer graphics will be illustrated through some specific examples.

Debanu Das has been a Staff Scientist in the Structural Molecular Biology group at the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory since 2007, working on high-throughput protein structure determination by x-ray crystallography. Debanu trained in protein crystallography during his postdoctoral and doctoral research at the Lawrence Berkeley National Laboratory from 2004 to 2007 and at Rutgers University from 1998 to 2004, respectively. Prior to that, he obtained his undergraduate degree in engineering and science, majoring in Chemistry, from the Indian Institute of Technology at Kanpur, India.

Tuesday March 16, 2010

Silicon Valley ACM SIGGRAPH Special Event:
Waking Sleeping Beauty Movie Screening -- Members Only Event

This event is in addition to the regular March Silicon Valley ACM SIGGRAPH chapter meeting.

We are happy to announce that a limited number of free tickets will be available to Silicon Valley ACM SIGGRAPH members for a special San Francisco screening of "Waking Sleeping Beauty" prior to its theatrical release.

Details about this wonderful film and its upcoming theatrical release are available below.

Waking Sleeping Beauty Tickets for ACM SIGGRAPH Silicon Valley Chapter Members
Tuesday, March 16
4:00 PM
San Francisco, CA

These tickets will be distributed on a first-come, first-served basis.
To attend, you must RSVP by noon, March 7th, 2010. (Update: all tickets have been distributed.)

Once the RSVPs have been gathered those receiving tickets will receive a confirmation e-mail which will include the details of the screening location. As we will not be able to accommodate everyone who RSVPs please only do so if you will definitely attend the screening.

WAKING SLEEPING BEAUTY
From 1984 to 1994, a perfect storm of people and circumstances changed the face of animation. Waking Sleeping Beauty is no fairytale. It is a story of clashing egos, out-of-control budgets, escalating tensions... and one of the most extraordinary creative periods in animation history. Director Don Hahn and producer Peter Schneider, key players at Walt Disney Studios' Feature Animation department during the mid-1980s, offer a behind-the-magic glimpse of the turbulent times the Animation Studio was going through and the staggering output of hits that followed over the next ten years. Artists polarized between the hungry young innovators and the old guard who refused to relinquish control, mounting tensions due to a string of box office flops, and warring studio heads create the backdrop for this fascinating story, told with a unique and candid perspective from those who were there. Through interviews, internal memos, home movies, and a cast of characters featuring Michael Eisner, Jeffrey Katzenberg, and Roy Disney, alongside an amazing array of talented artists that includes Don Bluth, John Lasseter, and Tim Burton, Waking Sleeping Beauty shines a light on Disney Animation's darkest hours, greatest joys, and its improbable renaissance. An Official Selection at the 2009 Telluride Film Festival and Toronto International Film Festival, and winner of the Audience Award at the Hamptons International Film Festival, Waking Sleeping Beauty is directed by Don Hahn and produced by Peter Schneider and Don Hahn.

Release date: March 26, 2010
Running time: 86 minutes
Rating: PG

If you can't attend the special screening --
The film will begin its theatrical run on March 26, 2010 with limited releases in New York, Los Angeles, Chicago, and San Francisco.

In New York, the film will open at the Landmark Sunshine. In Los Angeles, the film will open on two screens, at the AMC Century City and AMC Burbank. In Chicago, the film will open at the AMC River East, and in San Francisco at the Landmark Embarcadero - all on March 26, 2010.

There are also a few more festival showings coming up.
March 31 - Philadelphia, PA (film society)
April 10 - Dallas, TX (film festival)
April 11 - Sarasota, FL (film festival)
April 13 - Atlanta, GA (SCAD screening)
April 14 - Savannah, GA (SCAD screening)

Please help spread the word about the theatrical release via your social network.


Thursday March 11, 2010

Direct3D 11 and DirectCompute
by Chas. Boyd

DirectX 11 shipped last fall with Windows 7, and is now bringing new levels of graphics performance and technology to applications. Since you can read about those features on several well-made websites, this talk will present the motivation and philosophy behind its key elements and how they were designed. Emphasis will be on tessellation and DirectCompute.

Chas. Boyd is a software architect on the Windows graphics team. He joined the Direct3D team in 1995 and has contributed to DX releases since DirectX 3. Over that time he has worked closely with hardware and software developers to drive the support for and use of features like programmable hardware shaders and float pixel processing. He is currently working on broadening the use of GPU/data-parallel processors in both graphics and non-graphics application areas.

Thursday February 18, 2010

Computer Vision Techniques for High-end Visual Effects
by Chris Bregler and Kiran Bhat

The speakers will discuss several ongoing efforts in computer-vision-based techniques applied to match-moving, camera-based motion capture of people and objects, stereo reconstruction, and other challenges in the VFX pipeline at ILM. This includes a description of a new system called MultiTrack and its applications in various feature films in production at ILM.

Kiran Bhat is a research engineer at Industrial Light & Magic (ILM), the VFX branch of LucasFilm, San Francisco. He obtained his Ph.D. from the School of Computer Science at Carnegie Mellon University in 2004 and a B.Eng. from BITS, Pilani in 1998. His areas of interest are in graphics/vision algorithms, facial motion capture and animation, physical simulations and robotics.

Thursday January 7, 2010

Mobile Visual Computing
by Kari Pulli and Radek Grzeszczuk

The new generation of smartphones equipped with sensors and fast processors makes a great visual computing platform. In this talk, Kari Pulli, Research Fellow, and Radek Grzeszczuk, Principal Scientist, at Nokia Research Center in Palo Alto will present an inside look at some of the mobile visual computing possibilities of the near- and long-term future. The focus of the talk will be on mobile augmented reality and mobile computational photography, covering a variety of topics such as phone-based visual search, real-time visual tracking, and photo editing.

Kari Pulli,
Research Fellow, Visual Computing and Ubiquitous Imaging team leader,
Nokia Research Center

Kari's academic and professional life has mostly dealt with computer graphics and computer vision. For his M.Sc. thesis, Kari implemented a parallel graphics system on Transputers while visiting the University of Paderborn, Germany in 1990-91. He worked as a researcher at the University of Oulu, Finland on range vision in 1991-93. Kari's PhD at the University of Washington (1993-97) was a mixture of graphics and range vision (surface reconstruction and rendering with Tony DeRose, Linda Shapiro, and many others), with concurrent graphics internships at Microsoft, SGI, and Alias|Wavefront. Kari was the technical head of the Digital Michelangelo project for Stanford, in Palo Alto, Florence, and Rome, in 1998-99. Since joining Nokia in 1999, Kari has led much of the graphics research there and has worked on graphics APIs such as OpenGL ES, M3G, and OpenVG for mobile devices. Kari has been a Docent (adjunct professor) at the University of Oulu since 2001, and taught Computer Graphics there from 2000 to 2004. From June 2004 to August 2006 Kari was a visiting scientist at the MIT CSAIL Computer Graphics Group, and he moved to NRC Palo Alto in August 2006.

Radek Grzeszczuk,
Principal Scientist, Visual Computing and User Interfaces,
Nokia Research Center

Radek Grzeszczuk is a Principal Scientist at Nokia Research Center, Palo Alto. His research focuses on mobile visual search, mobile augmented reality, urban modeling, and visual user interfaces. Prior to joining Nokia Corp. in 2006, he was a Senior Researcher in the Architecture Research Lab at Intel Corp., where he worked on analysis and visualization of large image data sets. He also worked on parallel algorithms and performance analysis of applications in image processing, physical simulation, optimization, and machine learning. He received his MS (1994) and PhD (1998) degrees from the University of Toronto.

2009 Events:

Thursday Dec 3, 2009

The SIGGRAPH 2009 Electronic Theater
by the Silicon Valley Chapter of ACM SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques)

Video presentation of cutting edge computer animation from the 2009 SIGGRAPH conference.

This special screening of the most astounding achievements in computer graphics animated shorts was the highlight experience of the 2009 SIGGRAPH computer graphics conference in New Orleans.

Thursday Nov 5, 2009

Unity Technologies - Taking the pain out of game development
by David Helgason

David Helgason will talk about what Unity means for interactive 3D, web 3D, and mobile games; why democratization of technology, great design, and big developer communities win; and how Unity's "macro bet" is that over time 3D content becomes both more powerful and cheaper to produce.

David Helgason has served as CEO of the game technology company Unity Technologies since cofounding it in 2003. Unity Technologies' vision is to democratize game development and to develop technology for the next generation of the industry: from in-browser MMOs, through mobile, to social, casual, serious games, and beyond. The most amazing thing is that it's working: Unity is used by dozens of big game publishers and media companies, hundreds of smaller studios, and thousands of independent professionals, hobbyists, students, and 14-year-old boys. Over 400 schools worldwide use Unity to teach or train students.

Thursday October 15, 2009

Sirikata: Open Source Virtual Worlds
by Daniel Horn, Henrik Bennetsen

Sirikata (www.sirikata.com) is a BSD-licensed open source platform for games and virtual worlds. We aim to provide a set of libraries and protocols which can be used to deploy a virtual world, as well as fully featured sample implementations of services for hosting and deploying these worlds. The platform has grown out of several years of research at Stanford University, and the current ambition is to expand it into a fully community-run open source project. In the talk we will describe the technology and its applications and explore some possible roads ahead.

Daniel Horn: Daniel is a PhD candidate in the Stanford Graphics Lab. Daniel has had an interest in shared 3D experiences since starting his open source 3D space simulator Vega Strike in 1998. Currently Daniel is developing an open source, BSD-licensed virtual world platform, Sirikata, at Stanford. Sirikata is designed to scale and to provide a standard for shared 3D experiences, including telepresence, games, and the next generation of 3D web applications.

Henrik Bennetsen: In his role as associate director of the Stanford Humanities Lab, Henrik maintains a strong interest in 3D collaborative spaces and open source technology. He is heading up the Speed Limits research project, a collaboration with Bornholms Kunstmuseum in Denmark to explore how 3D collaborative technologies may augment traditional cultural institutions. As part of this he is deeply involved in the development of the open source Sirikata platform for the deployment of games and virtual worlds. At the 2009 MiTo music festival he performed in the mixed reality performance Una serata in Sirikata after leading the development of the enabling technologies. Previously Henrik led the Lifesquared research project, which explored animating traditional archives using new technology. The work was shown at the Museum of Fine Arts in Montreal (2007) as well as SFMOMA (2008). In 2007 he co-founded the Stanford Open Source Lab, which has since grown to 60+ members from across the Stanford community. Henrik is Danish, with an M.Sc. in Media Technology and Games from the IT University of Copenhagen and a B.Sc. in Medialogy from Aalborg University. Before his return to the world of academia, Henrik was a professional musician, and he still has a strong side interest in creative self-expression augmented by technology.

Thursday September 24, 2009

Ray Tracing with the NVIDIA OptiX Engine
by Austin Robison

Learn about a new general programming interface for performing ray tracing on NVIDIA GPUs using C for CUDA. This new technology is valuable for anyone who wants to build a high-performance ray tracing renderer (interactive or off-line), accelerate an existing ray tracing renderer, add ray tracing capabilities to raster renderers, or perform generic ray tracing queries for applications such as collision detection. Explore the engine with an API walkthrough as well as example code demonstrating basic rendering with ray tracing and recent hybrid algorithms.
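
For readers unfamiliar with ray tracing queries, here is a generic ray-sphere intersection test in TypeScript; this is the kind of query an engine like OptiX accelerates, not the OptiX API itself.

    // Generic ray-sphere intersection -- illustrative, not the OptiX API.
    type Vec3 = [number, number, number];

    const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];

    // Returns the nearest hit distance t along the ray, or null on a miss.
    function intersectSphere(origin: Vec3, dir: Vec3, center: Vec3, radius: number): number | null {
      const oc = sub(origin, center);
      const a = dot(dir, dir);
      const b = 2 * dot(oc, dir);
      const c = dot(oc, oc) - radius * radius;
      const disc = b * b - 4 * a * c; // discriminant of the quadratic in t
      if (disc < 0) return null;      // ray misses the sphere
      const t = (-b - Math.sqrt(disc)) / (2 * a);
      return t > 0 ? t : null;        // only count hits in front of the origin
    }

    // Example: a ray along +z from (0, 0, -5) hits a unit sphere at the origin at t = 4.
    console.log(intersectSphere([0, 0, -5], [0, 0, 1], [0, 0, 0], 1));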

Austin Robison is a Research Scientist at NVIDIA working on the OptiX interactive ray tracing engine. Before joining NVIDIA, he was Chief Developer of Technology at RayScale LLC, a ray tracing startup that was acquired by his current employer. Austin holds a BS in Computer Science from the University of Chicago as well as an MS in Computer Science from the University of Utah.

Here are the meeting slides.

Thursday June 11, 2009

Graphics on the Web: Going Beyond Images and Rectangles
by Vladimir Vukicevic

Over the past few years, browser innovation has proceeded at a fever pitch. As a result, the open web has many capabilities that were not possible even a few years ago, all available in a standards-compliant and cross-platform way. In this presentation, we'll look at some of the recently available capabilities, as well as provide a look at features that are currently being worked on. We'll also examine efforts to get 3D capabilities into the browser, and talk about some of the core improvements in browser technologies that are making this possible.

Vladimir Vukicevic has been involved with the Mozilla project for four years, and is currently the Firefox technical lead. Vlad is interested in bringing new capabilities to the open web platform, especially in the graphics and multimedia areas, to enable the development of rich and engaging web applications. In the past, he's been involved in improving portions of the Firefox UI, optimizing the browser's rendering layer, as well as implementing features such as the HTML5 Canvas in Firefox.

Thursday May 14, 2009

Caustic Real Time Ray Tracing
by James A. McCombe and Alex Kelley

Caustic Graphics is a fabless semiconductor startup with a revolutionary algorithm, implemented in a custom chip design, that massively accelerates pure ray tracing. Today, this kind of rendering typically runs on a costly render farm of thousands of computers, with images taking hours or even days to produce. This means slow iterations, long production cycles for 3D visualization, and an inability to use production-quality 3D earlier in the design process. Caustic's CausticRT platform, based on their breakthrough chip design, promises ongoing order-of-magnitude gains in pure ray-tracing speed to remove this creative bottleneck.

James McCombe, Caustic Graphics CTO and Founder, will discuss and demonstrate the CausticRT ray-tracing platform. The system is now shipping to qualified developers and includes the CausticOne accelerator card and the CausticGL API. CausticOne achieves a 10-20x performance gain over current software renderers on a modern 8-core CPU, while CausticTwo, due in early 2010, is expected to be 200 times faster than current software. At that time Caustic Graphics expects several commercial rendering packages to be available that support their technology. Moreover, artists and designers will for the first time be able to leverage these phenomenal ray-tracing performance gains in their production pipelines.

Caustic Website

James A. McCombe - Chief Technical Officer, Founder

James, a native of Belfast, is the technical visionary behind Caustic and one of the company's three founders. Most recently he was the chief architect of Apple's next-generation embedded rasterization algorithms, the basis of the rendering and compositing technology used in the iPhone and iPod. He was also a lead architect for Apple's OpenGL graphics system, and worked with the OpenGL standards committee to create early specifications for programmable shading languages.

Before Apple, James wrote the world's first fully interactive 3D rendering engine and first-person shooter game for the Palm mobile platform. Upon moving to the U.S. in 2000, James worked at Firepad where he continued to develop the mobile rendering technologies that formed the foundation of street mapping solutions available on today's most innovative mobile phones.

Alex Kelley - VP of Worldwide Sales & Marketing

Alex Kelley has over 20 years of sales, marketing, and general management experience in 3D computer graphics. Prior to joining Caustic Graphics, Alex was a Vice President at Autodesk, the third largest software company in the world. There he managed all sales and marketing functions for the Media & Entertainment division in Japan and was responsible for annual sales in excess of $48M. Prior to Autodesk, Alex was a long-time Vice President at Alias, a leading 3D computer graphics software company. He managed all facets of sales, marketing, and operations for Asia-Pacific, with over 60 multinational staff located across Japan, Korea, China, and Singapore.

Alex began his career in computer graphics as a researcher, publishing his seminal work on terrain simulation in the proceedings of ACM SIGGRAPH Computer Graphics in 1988. After graduate school he joined AT&T Bell Labs and was part of the team that developed the Pixel Machine, one of the first parallel image computers. Alex is fluent in Japanese, and holds B.S. and M.S. degrees in Computer Science from Arizona State University.

Thursday April 16, 2009

Hot Research in Silicon Valley
by Stanford Computer Graphics Lab

Fluid Simulation:

Avi Lev Robinson-Mosher will be presenting a method for obtaining more accurate tangential velocities in solid-fluid coupling. This extends and improves the group's method from last year, which required a mass-lumping strategy that did not allow for freely flowing tangential velocities. As in that previous work, the method prevents leaking of fluid across a thin shell; unlike that work, however, it does not couple the tangential velocities in any fashion, allowing proper slip independently on each side of the body. Moreover, since it treats the tangential velocity accurately and directly, it does not rely on grid refinement to obtain a reasonable solution, and therefore gives a highly improved result on coarse meshes.

Avi Lev Robinson-Mosher is a fourth-year PhD student in Ron Fedkiw's group in the Stanford Computer Science department. He conducts research on physics-based simulation for computer graphics, mostly in the area of solid-fluid coupling. Before coming to Stanford, Avi earned his master's in political philosophy at the London School of Economics and Political Science. Before that he was a proud member of Davenport College at Yale, completing a BS in computer science and a BA in physics. In his spare time he consults at ImageMovers Digital, is a member of the Stanford Archery Team, and does a fair amount of social dance.

GRAMPS:

We introduce GRAMPS, a programming model that generalizes concepts from modern real-time graphics pipelines by exposing a model of execution containing both fixed-function and application-programmable processing stages that exchange data via queues. GRAMPS allows the number, type, and connectivity of these processing stages to be defined by software, permitting arbitrary processing pipelines or even processing graphs. Applications achieve high performance using GRAMPS by expressing advanced rendering algorithms as custom pipelines, then using the pipeline as a rendering engine. We describe the design of GRAMPS, then evaluate it by implementing three pipelines (Direct3D, a ray tracer, and a hybridization of the two) and running them on emulations of two different GRAMPS implementations: a traditional GPU-like architecture and a CPU-like multi-core architecture. In our tests, our GRAMPS schedulers run our pipelines with 500 to 1500 KB of queue usage at their peaks.
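
To make the stages-and-queues idea concrete, here is a toy TypeScript rendition; it is a sketch of the concept only, not the GRAMPS API or scheduler from the paper.

    // Toy stages-and-queues pipeline in the spirit of GRAMPS -- illustrative only.
    type Stage<I, O> = (input: I) => O[];

    function runPipeline<A, B, C>(
      source: A[],
      stage1: Stage<A, B>,
      stage2: Stage<B, C>,
      queueCapacity = 4,
    ): C[] {
      const queue: B[] = []; // bounded queue connecting the two stages
      const out: C[] = [];
      for (const item of source) {
        queue.push(...stage1(item));
        // Drain when the queue fills, mimicking a scheduler that keeps
        // queue footprints small.
        while (queue.length >= queueCapacity) out.push(...stage2(queue.shift()!));
      }
      while (queue.length > 0) out.push(...stage2(queue.shift()!));
      return out;
    }

    // Example: an amplifying "tessellate" stage feeding a "shade" stage.
    const result = runPipeline(
      [1, 2, 3],
      (n: number) => [n, n + 0.5],
      (f: number) => [`shaded(${f})`],
    );
    console.log(result);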

Solomon Boulos is a second-year PhD student in Stanford University's graphics lab working with Pat Hanrahan. His research interests include parallel systems, with a focus on rendering and ray tracing in particular. He also spends one day a week working for Sony Pictures Imageworks, turning research into practice.

Thursday March 19, 2009

Photoshop CS4 Extended and 3D
by Zorana Gee - Photoshop Product Manager

3D was first introduced to Photoshop in CS3 Extended. With the latest release, Photoshop CS4 Extended takes a huge leap forward in its 3D capabilities, including a brand-new 3D engine that enables painting directly on a 3D model, a rich materials editor, and a high-quality raytracer. This 60-minute presentation will cover many of these new features, including an in-depth discussion of the raytracer with Photoshop computer scientists Pete Falco, Nathan Carr, and Aravind Krishnaswamy.

Nathan Carr is a researcher in Adobe's Technology Labs working on computer graphics related topics. He joined Adobe in 2006 after completing his Ph.D. in the Department of Computer Science at the University of Illinois Urbana-Champaign under the guidance of John C. Hart. His thesis focused on techniques for mesh parameterization, with an emphasis on making them useful for accelerating surface processing algorithms on consumer-level graphics hardware. He interned twice with Intel's graphics group, in 1999 and 2000, and with Nvidia's architecture team in 2002.

During his career at Adobe, Nathan has published numerous papers covering the areas of rendering, fluid simulation, graphics hardware, and geometry processing. Nathan helped initiate the Adobe Image Foundation project under which the Pixel Bender language was developed. Pixel Bender is a custom language for expressing image and audio processing algorithms that readily maps the expressed computation to parallel processing platforms such as multi-core CPUs or GPUs. Pixel Bender is currently shipping in Flash Player 10 and as a plug-in into Photoshop CS4. More recently, Nathan co-developed the interactive film quality renderer ART that is shipping with Photoshop CS4.

Aravind Krishnaswamy is a graphics researcher in Adobe’s Advanced Technology Labs working on graphics related topics. He joined Adobe in 2005 where he worked on Photoshop CS3. In 2007, he joined the Advanced Technology Labs and helped develop interactive ray tracing technology which shipped in Photoshop CS4. Some of his current research interests include: simulation of natural phenomena, biophysically-based rendering of organic materials, practical global illumination, and parallel computing. Prior to joining Adobe, he spent 6 years at Inscriber (now Harris Broadcast Systems) developing animation and broadcast television software.

Aravind received his BMath and MMath in Computer Science from the University of Waterloo. His thesis addressed the simulation of light interaction with human skin. The results of his research have been presented in several publications and he has given tutorials on the subject at several conferences including SIBGRAPI, EUROGRAPHICS, AFRIGRAPH and SIGGRAPH Asia.

Pete Falco is a Sr. Computer Scientist for Adobe Photoshop. Pete has been on the Photoshop team since 2005 and is focused on 3D and technology transfer for Photoshop. Prior to joining Adobe, Pete worked as an engineer on QuickTime VR at Apple, as the Director of Engineering at Live Picture and co-founded Zoomify. He holds a BS and ME from Rensselaer Polytechnic Institute.

Zorana Gee is a Product Manager on the Photoshop team. She holds an MBA from the Leavey School of Business at Santa Clara University. Zorana has been on the Photoshop team for over 9 years and has been involved with Photoshop Extended from the beginning. She has been instrumental in the 3D effort and has a deep understanding of the product. Zorana continues to help drive the implementation of 3D tools not only within Photoshop but also in the Adobe Creative Suite solution. Outside of Adobe, her time is often spent teaching the art of Capoeira to her community. She has been training Capoeira for over 11 years and holds the equivalent of a black belt.

Thursday Feb 19, 2009

Cooliris - Think beyond the browser
by Austin Shoemaker

Cooliris is developing a powerful new way to search, discover, and consume rich media on the Web. The product integrates with your Web browser and leverages the GPU to deliver a highly performant and visually stunning user interface. We will discuss the fundamental architecture of the client and summarize the challenges and lessons learned while extending the product to a wide variety of hardware configurations on Mac, Windows, and other platforms.

Cooliris Website

Austin Shoemaker is co-founder and CTO at Cooliris, working to build a next-generation visual experience for the Web. Austin stopped out of the CS Master's program at Stanford in 2007 to pursue this goal. Prior to Cooliris, he was a software engineer at Apple on the Mac OS X, iPhoto, iMovie, and Spotlight teams, and started as the youngest intern in company history. As an undergraduate, he competed on the rowing team and won Stanford's first rowing national championship in 2005.

Thursday Jan 22, 2009

The SIGGRAPH 2008 Electronic Theater
by the Silicon Valley Chapter ACM SIGGRAPH (Special Interest Group in Graphics)

Video presentation of cutting edge computer animation from the 2008 SIGGRAPH conference.

This special screening of the most astounding achievements in computer graphics animated shorts from last year was the highlight experience of the 2008 SIGGRAPH computer graphics conference in Los Angeles.

SIGGRAPH 2008

2008 Events:

Thursday Dec 11, 2008

Live & Real Time Graphic Effects on Air
by Louis Gentry

Sportvision is the nation's premier innovator of sports and entertainment products for fans, media companies and marketers.

  • Eight Emmy Awards, including three for its signature broadcast enhancements, the virtual yellow 1st and Ten™ line and K Zone™, and three for its pioneering advanced media work with NASCAR
  • 2000+ live events since 1998
  • Invented over half of all the Technological Advancements in Sports Television (Sports Business Journal, 2002)
  • Products that have enhanced the world's most prominent sporting events, including the Super Bowl, the Summer and Winter Olympic Games, Daytona 500, World Series, Wimbledon, NBA Finals, U.S. Open Golf Championship, British Open, NCAA Final Four and the entire NCAA Bowl Championship Series

The presentation included several off-air video clips and an explanation of the underlying technology, with emphasis on the demands of live, real-time graphic effects on air.

Sportvision website

Louis Gentry is a senior managing engineer at Sportvision and has been with the company for almost five years. His software development in Sportvision's core technologies drives many of today's suite of broadcast effects, including 1st and Ten, PitchFX, and ESPN's Player Tracker. Louis leads a team of engineers developing products as well as enhancing the capabilities of the company's core rendering technologies.

Prior to Sportvision, Louis worked for Pinnacle Systems developing a streaming DVD engine for the company's consumer product line. He got his start in computer graphics while working for Silicon Graphics Inc., now SGI, where he developed OpenGL applications for the Windows team.

He received a B.S. in Computer Science from Washington University in St. Louis.

Thursday Nov 13, 2008

FX for Wall-e and Pixar
by David MacCarthy

David MacCarthy, the Effects Supervisor for Wall-e, will give an overview of the use of effects in Wall-e, what new technologies were developed for the film, and how effects are used at Pixar.

David MacCarthy has been with Pixar Animation Studios since January 2001. MacCarthy has worked as a Technical Director on a number of Pixar's feature films, including Monsters, Inc., Finding Nemo, The Incredibles, and Cars, and as Effects Supervisor on Wall-e. MacCarthy is originally from Ireland, moving to the United States in 1987. He attended the School of the Art Institute of Chicago, after which he spent several years working in post production and games before joining Pixar.

Thursday Oct 16, 2008

Larrabee
by Nola Donato & Stephen H Hunt

Larrabee is a many-core visual computing architecture that greatly increases flexibility and programmability compared to standard GPUs. It uses multiple in-order x86 CPU cores augmented by a wide vector processor unit to provide dramatically higher performance per watt and per unit of area than out-of-order CPUs on highly parallel workloads. A coherent on-die 2nd level cache allows efficient inter-processor communication and high bandwidth local data access by CPU cores.

Task scheduling is performed entirely with software in Larrabee, rather than in fixed function logic. The customizable software graphics rendering pipeline for this architecture uses binning in order to reduce required memory bandwidth, minimize lock contention, and increase opportunities for parallelism relative to standard GPUs. The Larrabee native programming model supports a variety of highly parallel applications that use irregular data structures. Performance analysis on those applications demonstrates Larrabee’s potential for a broad range of parallel computation.
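
A rough Python sketch of the binning idea (tile size and data layout are illustrative, not Larrabee's): each triangle's screen-space bound drops it into per-tile bins, so a core can later shade one tile entirely from its own bin, with good cache locality and no cross-tile locking.

    TILE = 64  # pixels per tile side (illustrative)

    def bin_triangles(triangles, width, height):
        """triangles: list of ((x0, y0), (x1, y1), (x2, y2)) pixel coords."""
        tiles_x = (width + TILE - 1) // TILE
        tiles_y = (height + TILE - 1) // TILE
        bins = {(tx, ty): [] for tx in range(tiles_x) for ty in range(tiles_y)}
        for tri in triangles:
            xs = [p[0] for p in tri]
            ys = [p[1] for p in tri]
            # Conservatively bin by the triangle's bounding box.
            for tx in range(int(min(xs)) // TILE, int(max(xs)) // TILE + 1):
                for ty in range(int(min(ys)) // TILE, int(max(ys)) // TILE + 1):
                    if (tx, ty) in bins:
                        bins[(tx, ty)].append(tri)
        return bins

    bins = bin_triangles([((10, 10), (100, 20), (50, 90))], 256, 256)
    print(sum(len(b) for b in bins.values()), "tile entries")   # 4 tile entries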

Nola is a graphics architect at Intel working on the Larrabee project. She has worked on graphics since she was a graduate student in the Electronic Visualization Lab at the University of Illinois, where her research was used as the basis for a game console. Nola has worked on graphics software for top-selling applications like Microsoft PowerPoint and Adobe Creative Suite. She has also designed game engines and tools for Sun, Mattel, Bally, 3DO and Silicon Graphics. Nola first became interested in parallel graphics as a researcher in the Intel Microprocessor Graphics Lab, where she led a team that developed a scene manager for a distributed cluster. Now she is looking at how to keep many Larrabees really busy.

Steve is a principal engineer at Intel currently working on the Larrabee system architecture. He graduated from the University of Pennsylvania with a BSEE in 1982 and joined Intel on the 8051 microcontroller product development team. Since then he has held a variety of roles, mostly as a design engineer (a uC51-core-plus-cells custom ASIC), CAD tool developer (logic simulation, synthesis, DFT), micro-architect (Pentium, Pentium II), and researcher in computer-human interfaces, large-scale displays, advanced workloads, and parallel/throughput computing architectures. More recently, he led the micro-architecture and logic design of an enterprise server chip. He began work on Larrabee in 2006, where he is responsible for the system architecture and global features, and has been heavily involved in mapping out the future Larrabee product roadmap.

Intel Developer Forum

Thursday Sept 18, 2008

Using the GPU to do Video Decoding, Encoding and Transcoding
by Mike Schmit

CPU clock speeds are no longer increasing dramatically; performance gains now come from adding more and more CPU cores. GPUs have been adding shader cores for many years and can already handle many threads. How do you go about programming a large number of cores with a massive number of threads to do video processing, such as H.264 decoding and encoding?
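
One common answer, sketched below under simplifying assumptions (encode_gop and the byte-string frames are hypothetical stand-ins for a real encoder), is to exploit coarse-grained parallelism first: independent groups of pictures (GOPs) are farmed out to cores, with thread- and SIMD-level parallelism then applied inside each one.

    from multiprocessing import Pool

    def encode_gop(gop):
        # Stand-in for a real encoder; returns pretend "bits produced".
        return sum(len(frame) for frame in gop)

    def split_into_gops(frames, gop_size=8):
        return [frames[i:i + gop_size] for i in range(0, len(frames), gop_size)]

    if __name__ == "__main__":
        frames = [b"\x00" * 64 for _ in range(32)]    # stand-in raw frames
        with Pool() as pool:                          # one worker per core
            bits = pool.map(encode_gop, split_into_gops(frames))
        print("encoded", len(bits), "GOPs")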

Mike Schmit is a software engineering manager at Advanced Micro Devices, managing the Digital Video Software team and Stream Computing SDK. He's worked on optimizing video encoding, decoding and transcoding since 1995, when he helped develop the first software DVD player for the PC. He's written books on software optimization, started his own software company, taught computer architecture, programming and software engineering project management classes, and served on the Board of Directors and as President of the Software Entrepreneur's Forum. He currently works on parallelizing video encoders and transcoders as well as other audio- and video-related applications.

Thursday June 19, 2008

Mathematical Art - The Beauty of Numbers
A retrospective on 30 years of personal experiments using basic math and simple software to create images
by Bruce Puckett

The Early Years
-Big Iron, Plotters, Storage Tubes, and the Hidden Line Algorithm

Life Becomes Chaotic
-Personal Computers, but no memory
-Feedback Systems and Chaos (a small example appears after this outline)
-Discrete and Continuous Maps
-Cellular Automata
-But Is It Art?

Modeling and Rendering the Hard Way
-The Magic Matrix
-Vectors Point the Way
-Higher Dimensions
-Riding the Turtle

Polyhedra Are Fun
-Inversions and Exversions
-Extrusions
-Twists and Turns

Do It Yourself Dynamics
-The Color Circle, and Its Uses
-Vector Fields and Force Fields

Projections
-Convergence
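
In the spirit of the talk, here is a tiny example of the kind of feedback system on the outline above: the logistic map, a one-line discrete map whose orbit turns chaotic as the parameter r approaches 4, and whose iterates are already raw material for images.

    # Logistic map: the next value is fed back through r * x * (1 - x).
    r, x = 3.9, 0.2
    orbit = []
    for _ in range(20):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 3))
    print(orbit)   # an erratic, non-repeating sequence for r near 4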

Bruce Puckett graduated from the University of Washington with a Chemistry degree. Computer graphics was in its early stages, but he did manage to create some early math art images while there. While in school, Bruce worked for NOAA, turning fur seal data into density maps. He has always had two sources of inspiration: exploring and visualizing mathematics with the help of custom code, and wilderness landscape photography. What do these two passions have in common? They are both about visual geometry, whether natural or constructed.

Bruce next worked for Boeing Computer Services, doing programming support for CAD systems that were being used to design the next generation of airplanes. He attended his first Siggraph Conference in Seattle in 1980, and displayed some plotter drawings in the art show. After Boeing, he was drawn to Silicon Valley and joined the Fairchild Research and Development Lab, where he supported CAD applications and created a prototype circuit layout application.

Bruce then changed directions and became a teacher at Verde Valley School, teaching programming, chemistry, and Earth sciences. He continued his learning, and then teaching, at Foothill College. He has taught numerous programming classes, using various languages over the years, but his favorite subject to teach remains the 3D Modeling and Animation class.

Currently, he pursues his interests in digital visualization as an Independent Academic, writing and posting short papers and code examples on his web site for all to use. In recent times he has become worried about the increasing divide between students trying to enter the field and the overwhelmingly technical nature of what computer graphics has become. He hopes to help counter that divide by showing how people can have fun and learn by combining just a bit of custom code with just a bit of mathematical geometry, using the great diversity of software that is available.

Thursday May 15, 2008

Two-way Coupling of Rigid and Deformable Bodies
by Tamar Shinar

We propose a framework for the full two-way coupling of rigid and deformable bodies, achieved with both a unified time integration scheme and individually two-way coupled algorithms at each point of that scheme. As our algorithm is two-way coupled in every fashion, we do not require ad hoc methods for dealing with stability issues or interleaving parts of the simulation. We maintain the ability to treat the key desirable aspects of rigid bodies (e.g. contact, collision, stacking, and friction) and deformable bodies (e.g. arbitrary constitutive models, thin shells, and self-collisions). In addition, our simulation framework supports more advanced features such as proportional derivative controlled articulation between rigid bodies. This not only allows for the robust simulation of a number of new phenomena, but also directly lends itself to the design of deformable creatures with proportional derivative controlled articulated rigid skeletons that interact in a life-like way with their environment.
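
The proportional derivative control mentioned above reduces to a one-line torque rule. The sketch below is a toy illustration (the gains, inertia, and integration scheme are ours, not the paper's): a single joint is driven toward a target angle by a torque proportional to the angle error and damped by the angular velocity.

    def pd_torque(theta, omega, theta_target, kp=50.0, kd=5.0):
        # Proportional term pulls toward the target; derivative term damps.
        return kp * (theta_target - theta) - kd * omega

    theta, omega, inertia, dt = 0.0, 0.0, 1.0, 0.01
    for _ in range(300):                    # semi-implicit Euler steps
        tau = pd_torque(theta, omega, theta_target=1.0)
        omega += dt * tau / inertia
        theta += dt * omega
    print(round(theta, 3))                  # settles near the 1.0 rad target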

Tamar Shinar is a Ph.D. candidate at the Institute for Computational and Mathematical Engineering at Stanford University working with Prof. Ronald Fedkiw. Her research focuses on the development of new computational algorithms for physically based simulation, with applications in computational fluid dynamics, solid mechanics and computer graphics. In particular, she has worked on level set based multiphase fluid simulation, coupled rigid/deformable solid simulation, and coupled solid/fluid simulation. She plans to pursue a postdoctoral fellowship at the Courant Institute at NYU in the fall.

URL: Tamar Shinar's web page

Thursday April 10, 2008

The Lightspeed Automatic Interactive Lighting Preview System
by Doug Epps

We present an automated approach to high-quality previewing of feature-film rendering during lighting design. By leveraging large portions of the existing final renderer, we are able to cache light-independent data, interactively compute light-dependent data, and then re-sample onto the screen, including motion blur and transparency.
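
The caching split can be sketched as follows (a toy illustration, not the Lightspeed system): the light-independent buffers are produced once by the slow renderer, and each lighting edit re-runs only the cheap light-dependent shading over them.

    import numpy as np

    # Light-independent data, computed once by the expensive renderer:
    H, W = 2, 2
    normal = np.zeros((H, W, 3)); normal[..., 2] = 1.0
    albedo = np.full((H, W, 3), 0.8)
    cache = (normal, albedo)

    def relight(cache, light_dir, light_color):
        # Light-dependent pass: cheap enough to re-run on every edit.
        normal, albedo = cache
        ndotl = np.clip(normal @ light_dir, 0.0, None)[..., None]
        return albedo * light_color * ndotl

    img = relight(cache, np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.9, 0.8]))
    print(img[0, 0])   # [0.8  0.72 0.64]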

Doug Epps is Director of R&D at ImageMovers Digital, a production studio based in Marin County devoted to the performance-capture films of Robert Zemeckis. Doug's first job was as a Digital Input Device operator at Tippett Studio working on Jurassic Park. Doug worked at Tippett Studio for more than 10 years on projects ranging from "Coneheads" to "Starship Troopers" to "The Matrix" sequels. He has worked in the RenderMan group at Pixar as well as at Exluna on the Entropy renderer.

URL: The Lightspeed Automatic Interactive Lighting Preview System

Thursday March 27, 2008

Body, Space and Cinema
by Scott Snibbe

Scott Snibbe presented interactive works that incorporate reactive video projections, large-scale tracking of humans and vehicles, and his recent work Blow Up, which amplifies human breath as a large field of wind. He discussed the philosophical divide between language and visceral perception that motivates his creation of interactive media art. Working with technologies at the forefront of contemporary research, including computer vision and synthetic touch, Snibbe explores how a minimal intrusion of technology can provide insight into the nature of observers' minds and their sense of self. Works shown ranged from large-scale body-centric physical installations to interactive sculpture and screen- and web-based interactive graphics.

URLs: www.snibbe.com / www.snibbeinteractive.com / www.sonaresearch.org

Scott Snibbe creates immersive interactive art that evokes powerful emotional and social engagement from viewers. His works are known for their positive social effects: fostering a sense of interdependence, promoting social interaction among strangers, and increasing viewers' concentration. His artworks have been installed in over one hundred art museums, performance spaces, science museums and public spaces worldwide since 1995, including the Whitney Museum of American Art (New York); the InterCommunications Center (Tokyo); Ars Electronica (Austria); the Institute of Contemporary Arts (London); the Science Museum (London); the Exploratorium (San Francisco); the Phaeno Science Center (Germany); and the Cité de Science (Paris, France). He has been awarded a variety of international prizes, including the Prix Ars Electronica and a Rockefeller New Media Fellowship. He is the founder of two companies: Snibbe Interactive, Inc., which sells and distributes interactive installations for public spaces; and Sona Research, which engages in educational and cultural research.

Snibbe was born in 1969 in New York City. He holds Bachelor’s degrees in Computer Science and Fine Art, and a Master’s in Computer Science from Brown University. Snibbe studied experimental animation at the Rhode Island School of Design and his films have been widely shown internationally. He has taught media art and experimental film at Brown University, The San Francisco Art Institute, the California Institute of the Arts, The Rhode Island School of Design and UC Berkeley. Snibbe worked at Adobe Systems as a Computer Scientist where he made substantial contributions to the special effects software Adobe After Effects and research projects at Adobe Research. Snibbe held research positions at Interval Research where he performed basic research in haptics, computer vision and interactive cinema. Snibbe’s research is documented in a number of academic papers, and over a dozen patents.

Thursday February 28, 2008
The SIGGRAPH 2007 Electronic Theater
by the Silicon Valley Chapter ACM SIGGRAPH (Special Interest Group in Graphics)

Video presentation of cutting edge computer animation from the 2007 SIGGRAPH conference.

This special screening of the most astounding achievements in computer graphics animated shorts from last year was the highlight experience of the 2007 SIGGRAPH computer graphics conference in San Diego. The showing was projected in high definition and included all materials shown in the Electronic Theater at SIGGRAPH in San Diego.

2007 Events:

Friday December 14, 2007

Dolby - An Evening of 3D Digital Cinema

The Silicon Valley & San Francisco ACM SIGGRAPH chapters announce a joint meeting to see first hand the state of the art in 3D Digital Cinema. The discussion, as well as an actual demonstration using demo trailers and clips with a full Dolby 3D theater setup, will begin at 7:00 PM on December 14th at the Dolby Presentation Room, 3rd Floor, 100 Potrero Avenue, San Francisco, CA. Attendees are advised to arrive at 6:15-6:30pm for check-in.

Seating is limited and all attendees MUST pre-register on Acteva and bring a printout as proof of their registration to gain admission. http://www.acteva.com/booking.cfm?bevaID=148352

We look forward to and are thankful for this rare and gracious invitation from John Gilbert @ Dolby.

Please refer to the following press release for details on this remarkable development in cinema projection.

Dolby 3D Digital Cinema Expands Global Presence, Bringing High-Quality 3D Experiences to Theatres Worldwide: Exhibitors in Over 12 Countries Deploy Dolby 3D Digital Cinema for Paramount Pictures' Upcoming Beowulf Release.

SAN FRANCISCO, Nov 15, 2007 (BUSINESS WIRE) -- Dolby Laboratories, Inc. (NYSE:DLB) announced today that its Dolby(R) 3D Digital Cinema system will be available in 75 screens in 12 countries worldwide in time for the upcoming release of Paramount Pictures' Beowulf, premiering November 16. By securing deals with dozens of exhibitors in Asia, Europe, and the United States, Dolby is revolutionizing theatrical experiences with its high-quality digital 3D solution. Dolby will continue installing additional screens during Beowulf's two-week global opening.

"In a short time frame, the team executed an aggressive deployment plan to install Dolby 3D systems in theatres around the world for Beowulf, as we wanted to fulfill as many requests as possible from our valued customers," said John Iles, Vice President, Cinema, Dolby Laboratories. "With an unwavering commitment to a better 3D experience, we are confident that Dolby 3D will provide an exceptional presentation of Beowulf and other upcoming digital 3D movies."

The Dolby 3D Digital Cinema solution brings high-quality 3D to every seat in a theatre:

  • The ability for exhibitors to play back 3D content on a standard white screen provides moviegoers with an even image across the entire screen minus any hot spots or inconsistent light reflection.
  • Dolby 3D full-spectrum color-filter technology provides amazing color fidelity, delivering clear 3D images with realistic color.
  • Dolby's color filter technology also maintains premium picture quality because the filter wheel is inserted into the light path before the image is formed, delivering a stable and sharp picture.

The result is crystal clear images and vivid colors that pop off the screen with an amazing sense of depth.

"We are thrilled with the quality of Dolby 3D Digital Cinema and excited to show Beowulf in Dolby 3D," said Mike Thomson, Vice President Operations and Technology, Malco Theatres. "We will have the ability to play back 3D content on our big screen at the Malco Paradiso using Dolby 3D Digital Cinema. The large screen creates a 3D experience unlike anything we've been able to offer our patrons before."

"Marcus Theatres recently debuted Dolby 3D Digital Cinema and our patrons have been very pleased w ith the presentation quality," said Bruce Olson, President, Marcus Theatres. "Marcus strives to deliver the best moviegoing experience possible and we believe Dolby 3D reinforces that commitment."

"Dolby is a trusted brand for providing technologies that dramatically improve the cinematic experience, as we have seen with Dolby Digital Cinema," said Joost Bert, CEO, Kinepolis Group. "Kinepolis recently debuted Dolby 3D Digital Cinema at our newest cinema complex in Ostend, Belgium, and our patrons were very impressed with the sharp, clear, and bright images that seem to jump off the screen."

To date, exhibitors using Dolby 3D technology have presented Walt Disney Pictures' Meet the Robinsons and Tim Burton's The Nightmare Before Christmas 3D, and, in addition to Beowulf (Paramount Pictures/Warner Bros.), are expected to show the upcoming 3D presentations of Fly Me to the Moon (nWave Pictures), Hannah Montana (Disney) and Journey 3D (New Line). For a complete list of Dolby Digital Cinema and Dolby 3D Digital Cinema locations, please visit www.dolby.com/consumer/motion_picture/ddcinemas/.

Thursday December 13, 2007

Fast Light - Creating a Light Field Display
by Ian McDowall

Projectors capable of doing thousands of frames per second are now possible, and with the latest graphics cards we can feed them. The most interesting application of this technology so far will be the focus of this presentation: the 360 Degree Light Field Display shown at Siggraph's Emerging Technologies. The presentation will describe the system implementation and rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation up to 20 updates per second.

We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax.

We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery. The presentation will include video of the system and a demonstration of the fast projector, although not the entire light field display.
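
The quoted numbers are self-consistent, as this quick check shows: 360 degrees of views at 1.25-degree separation is 288 images per revolution, so refreshing the whole field 20 times per second demands a projector running in the several-thousand-frames-per-second class.

    views_per_rev = 360 / 1.25            # 288 views around the display
    frames_per_sec = views_per_rev * 20   # at 20 field updates per second
    print(views_per_rev, frames_per_sec)  # 288.0 5760.0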

Ian McDowall is a systems designer and CEO of Fakespace Labs, a company he co-founded in the early 1990's with two partners. Ian was part of a small team that developed a stereoscopic graphics computer for the NASA Ames Virtual Environment Workstation project in 1988. Since then Ian has implemented a variety of pioneering wide field of view and immersive displays at Fakespace Labs. Ian managed the development of Fakespace’s software which integrated various displays and IO devices. With a degree in Systems Design from the University of Waterloo, Ian brings together systems involving hardware, software, mechanical design, and optics. He is the inventor on a number of US patents and is the co-chair of the SPIE Engineering Reality of Virtual Reality conference. He has been a visiting scholar at Stanford and regularly collaborates on demonstrations at Siggraph’s Emerging Technologies.

Thursday November 15, 2007

3D Medical Images: A Nascent Market No More And Why it is Time for a Standard
by Michael Aratow

With the advent of increasingly sophisticated imaging modalities in medicine, fascinating 3D views of the human body are possible. Fueled by the aging global population, these images are being used more frequently in everyday medical tasks for diagnosis and to guide therapeutic interventions. In the past, images like these could only be viewed on specialized workstations housed in the radiology suite, but now they are appearing on desktop computers and even laptops, as Moore's law drives the power of GPUs ever higher. This shift in the accessibility of 3D medical images, coupled with their growing volume, has significant implications for interoperability and is driving a whole new set of applications. A standard for these images is vital to promote patient care and innovation in medical markets.

Michael Aratow is a Board Certified Emergency Physician with over 20 years of clinical experience in emergency medicine. He currently is Vice-Chair and Director of Quality Assurance in the Department of Emergency Medicine and Chief Medical Information Officer of San Mateo Medical Center, an integrated healthcare system consisting of a public hospital and community clinics serving San Mateo County. Dr. Aratow has 4 years of past research experience in various areas including orthopedics, basic physiology (with numerous publications), and virtual reality, the latter two conducted at NASA/Ames Research Center. His passion for 3D led him to a patent in 3D visualization of aviation weather and terrain data, and he now sits on the Board of Directors of the Web3D Consortium and co-chairs its Medical Working Group. He currently is the Principal Investigator on a contract with the Telemedicine and Advanced Technology Research Center, part of the U.S. Army Medical Research and Materiel Command, to assist in the development of an open standard for 3D medical images. He enjoys spending his spare time with his wife, two daughters, and son, assisting in orthopedic surgery, and PC gaming.

Thursday October 18, 2007

Interactive Drama: High Agency Interactive Storytelling
by Michael Mateas

High-agency interactive story, in which the player can have a real and complex effect on both the inner lives of autonomous characters and the evolution of the plot, is one of the holy grails of interactive art and entertainment. Unfortunately, attempts to create interactive stories have primarily involved design-only solutions using standard technologies such as finite state machines and simple story graphs, resulting in experiences that inevitably trade off agency and story structure. The consistent failure to combine agency and story has even prompted some designers and theorists to conclude that interactivity and story are fundamentally opposed. Façade, a first-person, real-time, one-act interactive drama (available for free download at www.interactivestory.net), is our attempt to constructively explore the design space of high-agency interactive story.

In this talk we describe the process of building Façade, a process that combined three simultaneous and related research and design thrusts: designing ways to deconstruct a dramatic narrative into a hierarchy of story and behavior pieces; engineering an AI system that responds to and integrates the player's moment-by-moment interactions to reconstruct a real-time dramatic performance from those pieces; and understanding how to write an engaging, compelling story within this new organizational framework. We provide an overview of the process of bringing our interactive drama to life as a coherent, engaging, high-agency experience, including the design and programming of thousands of joint dialog behaviors in the reactive planning language ABL, and their higher-level organization into a collection of story beats sequenced by a drama manager. We describe the iterative development of the architecture, its languages, authorial idioms, and varieties of story content structures, and how these content structures are designed to intermix to offer players a high degree of responsiveness and narrative agency. We conclude with design and implementation lessons learned, as well as current and future research and commercial directions.

Michael Mateas' research in AI-based art and entertainment combines science, engineering and design into an integrated practice that pushes the boundaries of the conceivable and possible in games and other interactive art forms. He is currently a faculty member in the Computer Science department at UC Santa Cruz, where he is involved in launching UCSC's game design degree, the first such degree offered in the University of California system. Prior to Santa Cruz, Michael was a faculty member at The Georgia Institute of Technology, where he held a joint appointment in the College of Computing and the School of Literature, Communication and Culture, founded the Experimental Game Lab, and helped create Georgia Tech's game design degree. With Andrew Stern, Michael released Façade, the world's first AI-based interactive drama in July 2005. Façade has received numerous awards, including top honors at the Slamdance independent game festival (co-located with the Sundance film festival). Michael's current research interests include game AI, particularly character and story AI, ambient intelligence supporting non-task-based social experiences, and dynamic game generation. In addition to frequent paper presentations at AI, HCI and digital media conferences, Michael has exhibited artwork internationally, including venues such as SIGGRAPH, the New York Digital Salon, ISEA, the Carnegie Museum, the Beall Center and Te PaPa, the national museum of New Zealand. Michael received his BS in Engineering Physics from the University of the Pacific, his MS in Computer Science (emphasis in Human-Computer Interaction) from Portland State University, and his Ph.D. in Computer Science from Carnegie Mellon University.

Thursday September 20, 2007

25 Years of Tools and Techniques: Divergence and Convergence
by Hank Grebe

Tools and Techniques Topics Covered:

. From Early Paint systems to Photoshop, importance of pen tablets

. 2D "Morphing" origins

. Vector graphics, and how tweening led to Flash

. 3D software tool evolution

. Exposure sheets and Keyframes - traditional animation origins and metaphors

. Compositing - Alpha channels, scripts behind the interface

. Rotoscoping

In this talk, Hank will review his career in animation and interactive computer graphics since the mid-1970s, share observations and anecdotes about the growth of technology in computer graphics, and show video clips of his work at the NYIT Computer Graphics Lab and at PDI/DreamWorks.

Subjects covered:

1) Traditional animation. Working with Stephen Lisberger leading up to his writing and directing TRON.

2) NYIT Computer Graphics Lab. Stories behind the development of the first 2D paint systems, 2D tweening, 3D keyframing, and early flexibly jointed characters, such as Gumby.

3) Early work with interactive multimedia interfaces, CD-ROMs and interactive TV at Time Warner Interactive.

4) Shrek 2 work at PDI/DreamWorks

5) Freelancing, contracting and entrepreneurial ventures.

Hank Grebe has been using computer graphics techniques to create art and animation for over 25 years and attended his first SIGGRAPH in 1983. He currently works at Mobile Greetings in Walnut Creek, designing interactive cell phone applications running on Verizon's wireless services.

Hank led a team of digital painters and motion graphics artists on PDI/DreamWorks Animation's feature SHREK 2. He has provided computer graphics and video technical direction at Time Warner, AT&T, Elektra Records, Merrill Lynch, and Intel, and for numerous agency clients, video production houses, and post-production facilities.

In 1995 Grebe founded Media Spin, a computer graphics consulting business, which he dissolved in 2003. Hank continues to update the web site, mediaspin.com, with blogging and new art projects.

Grounded in traditional painting and cel animation, Hank pioneered computer animation at NYIT's Computer Graphics Lab by rigging and animating one of the first flexibly jointed 3D characters, a 3D Gumby shown at SIGGRAPH's Electronic Theater in 1984 and 1985.

August 5 - August 9, 2007
ACM SIGGRAPH Conference in San Diego California

Daniel Lingafelter created the article "Attending SIGGRAPH 2007" for the Silicon Valley SIGGRAPH Chapter

Here is a link to the article:
http://silicon-valley.siggraph.org/MeetingNotes/siggraph2007.htm

Thursday June 21, 2007

New Features of Adobe Photoshop CS3
by Ashley Still and Pete Falco

Ashley Still and Pete Falco of Adobe will give an overview of some of the new features in Photoshop CS3 Extended, including movie paint, 3D, and automatic alignment and blending of multiple images. In addition to demonstrating these new features, they will provide an overview of the Photoshop 3D Plug-in SDK that can be used to extend the current capabilities. There will be ample time for Q&A.

Ashley Still is currently Sr. Product Manager for Adobe Photoshop. Ashley has been on the Photoshop team since 2004 and is focused on new markets and advanced technologies for Photoshop. Prior to joining Adobe, Ashley worked with an Entrepreneur in Residence at Sutter Hill Ventures developing and evaluating business plans and at eCircles.com, one of the first online sites offering photo-sharing and editing. She holds a BA from Yale University and an MBA from Stanford Graduate School of Business.

Pete Falco is currently Sr. Computer Scientist for Adobe Photoshop. Pete has been on the Photoshop team since 2005 and is focused on 3D and technology transfer for Photoshop. Prior to joining Adobe, Pete worked as an engineer on QuickTime VR at Apple, as the Director of Engineering at Live Picture and co-founded Zoomify. He holds a BS and ME from Rensselaer Polytechnic Institute.

Thursday May 17, 2007
Fluid Animation with Dynamic Meshes
by Bryan Klingner

We present a method for animating fluid with unstructured tetrahedral meshes that change at each time step. We demonstrate that meshes that conform well to changing boundaries and that focus computation in the visually important parts of the domain can be generated quickly and reliably using existing techniques. We also describe a new approach to two-way coupling of fluid and rigid bodies that, while general, benefits from remeshing. Several examples of resulting fluid animation are presented. Overall, our method provides a flexible environment for creating complex scenes involving fluid animation.
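
The shape of such a time step can be sketched at a high level. Every function below is a stub standing in for the paper's actual components; the point is only the ordering: rebuild the mesh so it tracks the moving boundaries, resample the fields, then advect, couple, and project.

    def remesh(domain, boundaries):              return {"nodes": [], "tets": []}
    def transfer(state, old_mesh, new_mesh):     return state
    def advect(state, mesh, dt):                 return state
    def couple_rigid_bodies(state, bodies, dt):  return state  # two-way coupling
    def project_incompressible(state, mesh):     return state  # pressure solve

    def step(state, old_mesh, domain, boundaries, bodies, dt):
        mesh = remesh(domain, boundaries)        # new tetrahedra each step
        state = transfer(state, old_mesh, mesh)  # resample onto the new mesh
        state = advect(state, mesh, dt)
        state = couple_rigid_bodies(state, bodies, dt)
        state = project_incompressible(state, mesh)
        return state, mesh

    state, mesh = {}, None
    for _ in range(3):
        state, mesh = step(state, mesh, None, [], [], 1.0 / 30.0)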

Bryan Klingner is a Ph.D. candidate in computer science at The University of California, Berkeley. He works on topics in computer animation and computational geometry with professors James O'Brien and Jonathan Shewchuk.

Bryan Klingner's web site

Presentation slides (26Mb)

Thursday April 26, 2007
SketchUp 6: Behind the Wheel and Under the Hood
by Brian Brown and Aidan Chopra, of Google

Brian Brown and Aidan Chopra of Google will give a wide-ranging overview of Google SketchUp, a free, cross-platform application for creating and presenting 3D models. In addition to demonstrating SketchUp and its integration with other programs such as Google Earth and the Google 3D Warehouse, Brian and Aidan will provide insight into the development of some of the new features of SketchUp 6; namely, Styles and LayOut. Ample time for Q&A will be left for those with specific questions.

Aidan Chopra is the Product Evangelist for SketchUp at Google. He lives and works in Boulder, Colorado, and his background is in architecture, illustration and graphic design. He has just completed work on Google SketchUp for Dummies, which is being published by Wiley, and which will be available in late June of this year.

Brian Brown is the Tech Lead for LayOut at Google. He works in Boulder, Colorado, and his background is in architectural engineering, lighting and optical design. He was part of the development effort for LayOut (beta) that was released in January of this year.

Monday March 12, 2007
DirectX 10
by Chas Boyd

Even though the DirectX 10 API and hardware have just shipped, graphics technology continues to advance. This talk outlines the next steps, beginning with the improvements in DirectX 10.1 targeting increased flexibility and image quality. We'll cover technologies currently under investigation for future releases including tessellating subdivision surfaces, generating compressed textures, and continuing improvements in CPU/GPU interoperation. This is a great opportunity to see where your most needed new features are on the list!

Chas Boyd is a software architect at Microsoft. Chas joined the Direct3D team in 1995 and has contributed to releases since DirectX 3. Over that time he has worked closely with hardware and software developers to drive the adoption of features like programmable hardware shaders and float pixel processing. He has developed and demonstrated initial hardware-accelerated versions of techniques like hardware soft skinning, and hemispheric lighting with ambient occlusion. He is currently working on the design of future DirectX releases and related components.

Thursday February 15, 2007
Luxology modo 202...and Beyond
by Brad Peebler, President and a Co-Founder of Luxology, LLC creators of modo 3D software

Luxology will present the 3D content creation tool "modo" and what makes it unique. With a significant focus on workflow, Luxology has designed a flexible application interface that allows deep user customization without the need for intense scripting or additional programming. Additionally, Luxology will discuss how it has improved artist workflow with a unique fusion of practices such as modeling, painting and rendering. Rather than partitioning these technologies into their own regimented areas, Luxology blends them together so that each can be leveraged regardless of the stage in the pipeline.

Brad Peebler is the President and a Co-Founder of Luxology, LLC, creators of modo. Peebler has worked in 3D for over 15 years and has held a variety of roles, from technical support on up. With an intense passion for content creation and technology, Peebler has always enjoyed a close working relationship with engineers and artists alike. As the son of two teachers, education and training are areas of deep involvement for Peebler and have had a significant impact on the way Luxology develops software and its business.

Thursday January 18, 2007
SIGGRAPH 2006 Electronic Theater
by the Silicon Valley Chapter ACM SIGGRAPH (Special Interest Group in Graphics)

Video presentation of cutting edge computer animation from the 2006 SIGGRAPH conference.

This special screening of the most astounding achievements in computer graphics animated shorts from last year was the highlight experience of the 2006 SIGGRAPH computer graphics conference in Boston.

2006 Events:

Thursday December 7, 2006
NVIDIA GeForce 8800 Architecture: Stream Processing for Graphics and Computing
by Henry Moreton and Ian Buck, NVIDIA

This presentation by NVIDIA architects gives a technical overview of the "G80" hardware architecture and covers the unified pipeline that enables geometry, vertex and pixel shading. This architecture provides extremely high performance, load balancing, efficient GPU power utilization, and significant improvements in GPU architectural efficiency. The scalar stream processing architecture also enables new avenues for programmability and thread computing, accessed via CUDA, a standard C language interface to the GPU that is targeted at data-intensive processing.
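
The phrase "scalar stream processing" is easiest to see with the canonical data-parallel kernel: every output element is an independent, pure function of its inputs, so the hardware can assign one scalar thread per element. In CUDA this would be a small C kernel launched over a grid of threads; the numpy sketch below only illustrates the shape of the computation.

    import numpy as np

    def saxpy(a, x, y):
        # Each output element depends only on its own inputs, so the whole
        # array maps naturally onto thousands of independent threads.
        return a * x + y

    x = np.arange(1_000_000, dtype=np.float32)
    y = np.ones_like(x)
    print(saxpy(2.0, x, y)[:4])   # [1. 3. 5. 7.]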

Henry Moreton joined NVIDIA in the fall of 1998 as a member of the architecture group. From 1984 to 1988, he worked at Silicon Graphics. In 1992 he received a Ph.D. from the University of California, Berkeley. He has published in the areas of curve and surface modeling, rendering, texture mapping, video and image compression, and unmanned submarine control. He has multiple patents in the areas of optics, video compression, graphics, system and CPU architecture, and curve & surface modeling and rendering. Current interests include the evolution of graphics programming models, API design, and the hardware architecture of highly parallel programmable devices.

Ian Buck completed his Ph.D. at the Stanford Graphics Lab in 2004. His thesis was titled "Stream Computing on Graphics Hardware," researching programming models and computing strategies for using graphics hardware as a general purpose computing platform. His work included developing the "Brook" software toolchain for abstracting the GPU as a general purpose streaming coprocessor. He currently works for NVIDIA as the GPU-Compute software manager.

Thursday November 16, 2006
Why Ray Tracing is Doomed
by Alexander Reshetov, Ph.D.

From a hardware perspective, two trends define the increasing importance of ray tracing research today: the emergence of consumer-level multicore architectures and the expanding functionality of modern GPUs. Even though ray tracing was long recognized as an embarrassingly parallel application, this had little practical use on a single-core CPU. GPUs, on the other hand, allowed parallel execution from the very beginning, but their limited programmability made it rather cumbersome and ineffective. The situation is rather different now.

All this is augmented by the development of new, very effective software algorithms. This could have heralded a long-anticipated transition from the traditional rasterization pipeline to ray tracing, except for one little thing: essentially all the recent progress in ray tracing was achieved by pursuing SIMD-friendly approaches. Accordingly, the general flexibility which characterized the first ray tracing algorithms was lost. At the same time, propped up by the persistent progress of GPUs, applications based on the traditional rasterization pipeline are getting a second wind.

I will try to make sense of all of this and will talk about the current state of the art in real-time ray tracing and beyond. I will also describe the existing problems, and the hardware features that would make ray tracing applications more effective.
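
The SIMD-friendly style referred to above traces a packet of rays together, so that one vector instruction advances many rays at once. The numpy sketch below intersects a 4-wide packet against a sphere; a production tracer would use SSE intrinsics and packet traversal of an acceleration structure instead.

    import numpy as np

    center, radius = np.array([0.0, 0.0, 5.0]), 1.0
    origins = np.zeros((4, 3))                 # one packet of 4 rays
    dirs = np.array([[0.0, 0.0, 1.0], [0.05, 0.0, 1.0],
                     [0.4, 0.0, 1.0], [1.0, 0.0, 0.0]])
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    # Vectorized ray-sphere test: all four discriminants in one pass.
    oc = origins - center
    b = np.einsum("ij,ij->i", oc, dirs)
    c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
    disc = b * b - c
    hit = disc >= 0.0
    t = np.where(hit, -b - np.sqrt(np.maximum(disc, 0.0)), np.inf)
    print(hit, t)   # the first two rays hit, the other two miss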

About the speaker...

Alexander Reshetov received his Ph.D. degree from the Keldysh Institute for Applied Mathematics in Russia. He joined Intel Corporation in February 1997 as a senior staff researcher after working for two years at the Superconducting Super Collider Laboratory in Texas, where he designed the control system for the accelerator.

His research interests span 3D graphics algorithms and applications, and physically based simulation. His recent work focuses on efficient ray tracing algorithms and data structures. He is best known for his work on the multi-level ray tracing algorithm, presented at Siggraph 2005, which is recognized as the fastest real-time ray tracing algorithm in the world.

Monday October 16, 2006
Pixar Animation Studios: The making of 'Cars'
by Pixar Animation Studios

San Francisco ACM SIGGRAPH and Silicon Valley ACM SIGGRAPH are proud to present this special event "Pixar Animation Studios: The making of 'Cars'"

Academy of Art University Theatre
Morgan Auditorium, Post & Mason
491 Post Street
San Francisco, CA. 94102
October 16th, 2006
Reception 6:30 p.m.
Event 7:00 p.m. to 9:00 p.m.
http://www.academyart.edu/map.html

sponsored by,
Academy of Art University - http://www.academyart.edu/
Ballistic Publishing - http://www.ballisticpublishing.com/

Thursday June 22, 2006
Invisible Visual Effects--Compositing in Mission: Impossible III
by Todd Vaziri, Digital Compositor, Industrial Light & Magic

Cinematic visual effects artisans have been creating illusions for film for well over a century. In most cases, the images created for films fall into one of two categories. Visual effects can either support the scope of the film, advancing the other-worldly spectacle of the story, or support the narrative in its rawest form, with a presence as invisible as the production's cinematography, editing, and production design, allowing characters and story to take center stage.

Invisible visual effects have progressed to a staggering degree with the rise of digital compositing. Advanced 2D, 3D and hybrid techniques give filmmakers a much larger canvas on which to paint. It is up to the digital effects artist to rein in spectacle and understand cinematic reality. We will look at the invisible visual effects of J.J. Abrams' "Mission: Impossible III" and illustrate how compositing helped bring the director's vision to life, bringing synthetic, hand-crafted images to the screen with the utmost importance placed on invisibility. We will look at specific shots and discuss the techniques and tools that made them happen. Examples ranging from the earliest days of cinema to specific case studies of modern effects films will also be discussed.

About the speaker...

Todd Vaziri
Digital Compositor
Industrial Light & Magic

Todd Vaziri joined Industrial Light & Magic in 2001, and has contributed to Star Wars: Episode II "Attack of the Clones," Signs, The Hulk, Pirates of the Caribbean and Van Helsing.

Vaziri has worked to expand the role of compositor at Industrial Light & Magic by going beyond the confines of straightforward compositing. He consistently takes on shots which require not only 2D compositing but also matte painting, particle effects and 3D compositing. This approach brings much more flexibility and efficiency to the production.

Vaziri grew up outside of Chicago, and graduated from Northwestern University in 1995 with a Bachelor of Arts degree in Film and now resides in San Francisco with his wife. Prior to joining ILM, Vaziri worked on several films as a lead artist and as compositing supervisor, such as Dr. Dolittle, American Pie, Driven and Hart's War. Vaziri also created the now-retired Visual Effects Headquarters web site which covered the visual effects industry (http://www.vfxhq.com).

ILM CREDITS
Feature Films

2006 MISSION: IMPOSSIBLE III (currently in production) - Sequence Supervisor
2005 WAR OF THE WORLDS - Compositor
2005 STAR WARS: EPISODE III "Revenge of the Sith" - Compositor
2004 SKY CAPTAIN AND THE WORLD OF TOMORROW - Compositor
2004 VAN HELSING - Lead Compositor
2003 PIRATES OF THE CARIBBEAN: THE CURSE OF THE BLACK PEARL - Compositor
2003 HULK - Compositor
2002 SIGNS - Compositor
2002 STAR WARS: EPISODE II "ATTACK OF THE CLONES" - Compositor

Thursday May 18, 2006
A History of PDI: A 25 Year Retrospective
by Richard Chuang, co-founder of PDI

Take a journey through 25 years of the CG industry: a personal view of how the industry has changed, from one who experienced the evolution firsthand.

There were no tools or applications to do animation at that time. Aspiring computer animators saw the crude animations presented at the SIGGRAPH Conference Electronic Theater by academic researchers, and dreamed of the potential of using the computer to help convey the lively visual stories that were present only for instants in their creative minds.

In 1981, Richard Chuang, along with Carl Rosendahl and Glenn Entis, started Pacific Data Images in an empty warehouse. They slaved away at their desks, surveying all of the computer graphics literature and writing software to build the tools, the business and the vision. They took on a few companies as clients and produced some stunning graphics for television, later moving to commercials and visual effects. Eventually, they got the chance to do some animations for storytelling's sake, and started making feature-length movies.

In this presentation, we tell the story of a journey from the startup of a pioneering CG animation studio through the production of mega-hits like Shrek. We follow the evolution of the studio from Pacific Data Images to PDI to PDI/DreamWorks Animation. We take you through the turbulent shake-up of the industry that left only a handful of studios remaining. We identify the critical points in technology and the business that revolutionized the industry.

This is not solely a history lesson: Richard will also show highlights from the 25 years of animation productions that built the PDI studio, from the early 80's through today.

About the speaker...

Richard Chuang
co-founder of PDI

A co-founder of PDI over 25 years ago, Richard Chuang helped create the studio's powerful proprietary animation system, which the Academy of Motion Picture Arts and Sciences recognized with a Technical Achievement Award in 1997. The PDI/DreamWorks proprietary animation system has been used on countless commercials and live-action features, and throughout PDI/DreamWorks' feature films ANTZ, the Academy Award-winning SHREK, and SHREK 2, for which Chuang served as head of special projects.

Known for his hands-on creative approach, Chuang's expertise is in computer animation and visual effects for both animated and live action films. He served as Visual Effects Supervisor on several DreamWorks films, including EVOLUTION, LEGEND OF BAGGER VANCE and FORCES OF NATURE.

Richard also led the team that created digital superheroes in the Warner Bros. blockbuster BATMAN AND ROBIN. These digital heroes were a follow-up to his earlier pioneering digital character work in BATMAN FOREVER. He has 16 live action film credits in the visual effects area.

Richard helped plan and guide the CG production for FATHER OF THE PRIDE, DreamWorks' first all-computer-animated primetime TV show, created for NBC. Currently, he is part of the executive group responsible for planning and forecasting future projects.

Thursday April 20, 2006
Grokking Emerging Multicore Processor Architectures and How to Leverage Them for Future Applications
by Yahya H. Mirza

With the STI CELL architecture, the DARPA HPCS effort, and the recently announced NSF petascale acquisition effort, a new generation of computational capabilities is now starting to emerge. Unfortunately, exactly how these powerful new computational capabilities will be utilized for a commercially viable next-generation killer app remains an open question. When looking for new opportunities, a useful exercise can be to revisit old ideas in conjunction with emerging new technologies. To facilitate this quest, this presentation will survey emerging multicore architectures such as the STI CELL, the Sun Rock, the Cray Cascade, and future x86 architectures, and illustrate their commonalities and differences. We will look at the problems these new chips are being designed to solve and then map those solutions to the architectural and micro-architectural techniques they utilize, such as superscalar, multithreaded, simultaneous multithreading, vector architectures, short-vector SIMD, stream architectures, and so on. We will illustrate how these solutions balance the need for high-performance computation against additional design goals such as minimum power and die size. Additionally, I'll illustrate the tradeoffs between functionality supported in hardware and functionality supported by the compiler and the programmer.

When looking to accelerate the algorithms in one's application using a new architecture, it's useful to understand both the application's opportunities for parallelism and the system's support for various kinds of parallelism. Given a target system and a problem, either the problem's algorithms can be mapped to the target, or alternatively new algorithms can be developed explicitly for that target system. To guide us, I'll discuss a common set of parallel patterns that have been used in the past to create high-performance applications. We will then discuss how these parallel programming patterns map to, and are explicitly supported or enabled by, the various architectural and micro-architectural elements of the emerging multicore architectures. Next we will discuss the tools we will need to program and thus explicitly leverage these parallel capabilities. To illustrate the technical issues involved, we will discuss a few parallel programming models that have been developed in the past, such as pThreads, MPI, and OpenMP, and how the processor architectures and computing systems of the time influenced the design of these solutions. Our goal is to provide a counterweight to the often knee-jerk reaction of prematurely selecting a particular programming tool for a particular problem without fully considering the implications involved.

A useful parallel programming model and parallel runtime for these emerging multicore-based systems will require the usage of parallelism in all its forms, including message passing, systolic arrays, data parallelism (in its many forms, including loop parallelism, short-vector SIMD parallelism, long-vector parallelism, and stream parallelism), asynchronous futures, SPMD parallelism, parallel collections, and more. The objective of this presentation is to provide intuition and holistic insight into what makes emerging multicore architectures like the CELL different from conventional processors, and how these differences impact how we can program and utilize scalable systems built from these new architectures to create our next-generation "killer apps".
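
As a concrete taste of one pattern on that list, here are asynchronous futures in their simplest form: work is submitted eagerly, the caller keeps computing, and the result is waited on only at the point of use.

    from concurrent.futures import ThreadPoolExecutor

    def expensive(n):
        return sum(i * i for i in range(n))

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(expensive, 100_000)   # runs in the background
        other = expensive(10)                   # caller keeps working
        print(other, fut.result())              # block only when needed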

About the speaker...

Yahya H. Mirza
Aurora Borealis Software LLC
15817 NE 90th St. Suite E340
Redmond WA 98052

Yahya Mirza's original background was in aeronautical engineering, and he was initially employed by Battelle Labs in Columbus, Ohio. After transitioning into the software industry, Yahya spent three years as a visiting researcher in the UIUC Smalltalk Research Group. Through his company, Aurora Borealis Software LLC, Yahya has worked on system software projects for Microsoft, CatDaddy Games, Source Dynamics and Pixar Animation Studios' RenderMan team. For the last five years, Yahya has been organizing the Language Runtimes workshops, held at the OOPSLA and Supercomputing conferences. Yahya's interest in scalable multicore computing systems is driven by his passion to create a real-time interactive feature film. Yahya is currently working on a new distributed programming model to explicitly leverage large-scale clusters built from emerging multicore processors.

Thursday March 16, 2006
Digital Light Field Photography
by Ren Ng

Conventional photographs record the sum of all light rays striking each position on an image plane. This talk, drawn from Ren's dissertation, explores how digital photography can be improved by instead recording the light field inside the camera: not just the position of each light ray, but also the direction in which it is traveling. The talk will discuss the design, prototyping and performance of a modified digital camera that records the light field in a single photographic exposure.

By resampling the recorded light rays, it is possible to compute final photographs more flexibly and at higher quality than in a conventional camera. For example, in digital refocusing we compute final photographs that are focused at different depths from a single shot. Theory predicts, and experiments corroborate, that we can reduce the misfocus blur anywhere in the photograph by a factor approximately equal to the directional resolution of the recorded light rays. In another application, we digitally correct lens aberrations by re-sorting aberrated light rays to where they should ideally have converged. This increases the quality of final photographs by raising the contrast.
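
Digital refocusing by resampling can be sketched as a shift-and-add over sub-aperture images (one image per ray direction). The toy below uses synthetic data; the alpha parameter, our name rather than the dissertation's, selects the focal depth by shifting each (u, v) view in proportion to its aperture offset before averaging.

    import numpy as np

    U = V = 3; H = W = 32
    lf = np.random.rand(U, V, H, W)        # stand-in light field L[u, v]

    def refocus(lf, alpha):
        U, V = lf.shape[:2]
        out = np.zeros(lf.shape[2:])
        for u in range(U):
            for v in range(V):
                du = int(alpha * (u - U // 2))   # shift grows with offset
                dv = int(alpha * (v - V // 2))
                out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)

    print(refocus(lf, alpha=2).shape)      # (32, 32)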

Here are some photographs taken with the prototype camera:
http://graphics.stanford.edu/papers/lfcamera/refocus/

Ren Ng is close to completing his PhD in the Computer Science department at Stanford. His focus is on digital photography systems, computer graphics and applied mathematics. He received his B.S. in Mathematical and Computational Science from Stanford.

Thursday February 9, 2006
Games for Teaching Medicine
by Parvati Dev, PhD and LeRoy Heinrichs, M.D., Ph.D.

Simulation technology is being used to address all aspects of health care delivery and management. The primary use is in education but the potential for work flow improvement and risk reduction are new areas of consideration. We will examine a range of patient simulations from simple multimedia patient records to patients who respond through speech and video. We will touch on novel sensory modalities such as haptics and stereo and their use in surgical procedure simulators. Finally we will discuss the potential of modeling teams of people interacting with a simulated patient, and the use of gaming technology to implement these virtual worlds.

Parvati Dev, PhD
Director, SUMMIT; Associate Dean, Learning Technologies
Office of Information Resources and Technology
Stanford University School of Medicine

Parvati Dev completed her doctoral degree in Electrical Engineering on computer models of the brain at Stanford University in 1975. She has worked on the research and teaching staff at M.I.T., Boston University, and Stanford, and spent seven years in industry, where she developed products for three-dimensional medical imaging. Since January 1990, she has been Director of SUMMIT, a research lab at Stanford. Her current research is in virtual reality and simulation for medical education.

LeRoy Heinrichs, M.D., Ph.D.
Associate Director, SUMMIT Lab; Emeritus Active, Dept. of GYN/OB
Stanford University School of Medicine

In 1976, Dr. Heinrichs was appointed Professor and Chairman of Gynecology and Obstetrics at Stanford University, where he performed and taught laparoscopic surgery and recognized the potential for teaching this type of surgery with virtual reality systems like those used by the aviation industry in training pilots. Finding no suitable emerging technologies, he began a start-up company that failed, but retired to SUMMIT, where he developed an anatomically correct human 3D model (Lucy v.2.6) as the virtual anatomy in pelvic surgical simulators. Dr. Heinrichs also initiated a project with Immersion, Inc. to develop a hysteroscopy trainer, now commercially available. At SUMMIT, where he is now Associate Director, he and colleagues are developing anatomy and surgical simulation projects for distribution over the Next Generation Internet (NGI). In 2002 he and SUMMIT received the 8th Annual Satava Award from the Medicine Meets Virtual Reality organization for his leadership in the field of surgical simulation. He is PI on a Wallenberg Foundation planning grant designing a 3D world for training medical teams in crisis management of trauma. Dr. Heinrichs writes on surgical simulation and lectures widely on this topic.

January 19, 2006
The SIGGRAPH 2005 Electronic Theatre
by Samuel Lord Black

Video presentation of cutting edge computer animation from the 2005 SIGGRAPH conference.

This special screening presents the most astounding computer graphics animated shorts from the past year, the highlight experience of the 2005 SIGGRAPH computer graphics conference in Los Angeles. This year's presentation will include footage that wasn't presented at SIGGRAPH. The Electronic Theater program contains 20 pieces covering animation, effects, storytelling, and visualization. The public is invited and welcome. The content is entertaining and educational for a general audience, not just those with a technical or artistic bent.

Samuel Lord Black has been in the software business for about 20 years, primarily in the graphics field in one form or another. He holds Bachelor's degrees in Electrical Engineering and Computer Engineering from the University of Michigan, and a Master's degree in Computer Science from the University of North Carolina. He worked for several years in the workstation and desktop computing industry for Apollo, Stellar, and Masscomp as a pioneer in the X Window System, with an emphasis on its applications in real-time computing. Ten years ago he moved to the video game industry (Papyrus Design Group), where he worked on several projects, with game credits including NASCAR Racing, IndyCar Racing, and Road Rash. He spent eight years working on rendering software for Pixar Animation Studios, followed by a stint as the Chair of the SIGGRAPH 2005 Computer Animation Festival. He is currently working as a graphics software engineer for Autodesk, Inc.



2005 Events:

Thursday, November 17, 2005
Recent advances in Photoshop technology
by Ashley Manning and Todor Georgiev

Ashley will be giving a concise tour of some of the most exciting tools added to Photoshop in the last few releases. Her presentation will provide a lead-in to a technical discussion of what's going on behind the scenes.

Todor will talk about the science behind some of your favorite Photoshop tools, like the Healing Brush. This will include an introduction to image processing in the gradient domain, emphasizing perceptually correct image processing based on invariance to changes of illumination.
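
For readers who want a feel for what gradient-domain processing means in practice, here is a generic C sketch (our illustration, not Adobe's Healing Brush code): edit an image's gradient field, then recover the image whose gradients best match it by relaxing toward the solution of a Poisson equation.

    /* poisson.c: generic gradient-domain reconstruction sketch. Given
     * an edited gradient field (gx, gy), relax the image I toward the
     * solution of laplacian(I) = div g with in-place (Gauss-Seidel
     * style) sweeps. Boundary pixels are held fixed for simplicity. */
    void reconstruct(float *img, const float *gx, const float *gy,
                     int W, int H, int iters)
    {
        for (int k = 0; k < iters; k++)
            for (int y = 1; y < H - 1; y++)
                for (int x = 1; x < W - 1; x++) {
                    /* discrete divergence of the target gradients */
                    float div = gx[y * W + x] - gx[y * W + x - 1]
                              + gy[y * W + x] - gy[(y - 1) * W + x];
                    img[y * W + x] = 0.25f *
                        (img[y * W + x + 1] + img[y * W + x - 1] +
                         img[(y + 1) * W + x] + img[(y - 1) * W + x] - div);
                }
    }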

Ashley Manning is a Product Manager for Adobe Photoshop, where she focuses on new markets and technologies for Photoshop. Prior to joining Adobe, Ashley worked with an Entrepreneur in Residence at Sutter Hill Ventures developing and evaluating business plans and at eCircles.com, one of the first online sites offering photo-sharing and editing. Ashley holds an MBA from Stanford Business School and a B.A. from Yale University.

Todor Georgiev has a Master's Degree in Theoretical Physics from Sofia University (1987), and a PhD in Theoretical Physics from Southern Illinois University (1995). He has been working on Adobe Photoshop since 1997, focusing on transferring mathematical methods from Theoretical Physics to Imaging. He is interested in color, perceptual and gradient-domain imaging, HDR, light fields, and computational photography. He has published numerous papers and holds more than 10 patents related to image processing.

Thursday October 20, 2005
Mobile Games of the Future: Total Immersion on a Small Screen
by Justin Radeka

Discussion points:

  • Enabling Technologies
  • OpenGL 2.0, which is breaking down the boundaries of game design across platforms, from PlayStation to mobile
  • Falanx's recently introduced Mali200, a fully programmable graphics core
  • Mixing Media (video, 3D, audio, 2D, still images)
  • Types of Games
  • Visual Quality
  • Next level of games in mobile gaming
  • Extended games, wireless worlds
  • Immersive gaming on a 2" screen

Justin Radeka serves as Vice President of Developer Relations for Falanx. Prior to joining Falanx, he served as CTO of EVGA, a graphics board manufacturer, and as head of the gaming group at Hewlett Packard. As the leader of the Developer Relations group at Falanx, Justin is working with top tier developers and publishers to develop the industry strategies in bringing the latest mobile graphics technologies to tomorrow's games. Radeka holds a BA from the University of California.

Thursday, September 15, 2005
Model. Paint. Render. modo 201
by Dion Burgoyne and Allen Hastings

With the new release of modo 201, announced at the SIGGRAPH 2005 International Conference, the developing program adds painting and rendering features to its already impressive modeling power. No other program is as fast or formidable in subdivision surfaces. (Ed Catmull, et al. Siggraph 1989)

Dion Burgoyne will take you through the new features of modo 201. Current 3D applications are taking notice of the developments in modo. You will be surprised and impressed with the speed and power.

Allen Hastings will take you into the details and wonders of the rendering engines he has been developing. Their blazing speed and remarkable results come from long study of, and experimentation with, rendering methods. Come see a wizard at this black art and appreciate what promises to be a benchmark program from one of the last American 3D software purveyors.

"modo 201 is not the first 3D environment to incorporate modeling, painting and rendering," explains Brad Peebler, president and co-founder of Luxology. "However, modo is the first environment built from the ground up to integrate these workflow steps in a way that dramatically improves the ability of 3D designers to focus more on creating and less on process and repetitive steps."

modo is the fastest, most advanced subdivision surface modeling environment available. Specifically designed to help 3D artists working on games, films, TV, print, architecture and Web productions, modo enables artists to accomplish more in less time. While the current version of modo focuses on delivering an elegant and advanced environment for accelerated 3D modeling, modo 201 takes these benefits to unprecedented levels across modeling, painting and rendering.

Allen Hastings: Chief Scientist. Allen Hastings' far-reaching vision is always tempered by mathematical accuracy and flawless execution. With a background in fine art and music, Allen first started writing software to make 3D digital movies at a time when few believed it was possible. His efforts were some of the first off-the-shelf 3D packages available. He created NewTek's LightWave 3D in 1989 and remained the primary force behind its animation and rendering algorithms through 2001. Allen received an Emmy for his work on LightWave 3D in 1993, and in 2001 Animation Magazine named him one of the top 15 most influential people in the animation industry. At Luxology, Allen continues to make significant original contributions to the art and science of 3D.

Dion Burgoyne. Dion Burgoyne came to Luxology to fill the role of Content Artist and Quality Assurance Manager. Dion is a film maker, photographer, and technical artist, and brings to Luxology a solid balance of art and science, just as we like. You will find Dion actively assisting the modo community through forum posts and training videos online, as well as through the creation of Perl scripts that add functionality for modo users through the third-party website www.vertexmonkey.com.

July 31 - August 4, 2005
ACM SIGGRAPH Conference in Los Angeles California

Walter Vannini created the article "Attending SIGGRAPH 2005" for the Silicon Valley SIGGRAPH Chapter

Here is a link to the article:
http://silicon-valley.siggraph.org/MeetingNotes/siggraph2005.htm

Thursday June 16, 2005
Advanced Physics for Multi-Processor Scalability
by Tom Lassanske

This talk explores how a multi-threaded physics API can help a developer make the most of the extra power afforded by multi-processor platforms to achieve more realistic worlds, while still maintaining compatibility with current-generation consoles if necessary. The AGEIA PhysX hardware will also be demonstrated in a couple of sample applications.

Tom Lassanske received a BS in Mechanical Engineering from the University of Wisconsin and an MS in Computer Science from the University of North Carolina at Chapel Hill, where his focus was on physical simulation and computer graphics. For the last several years, he has worked with the games industry writing and integrating collision and dynamics middleware from NDL, Havok, and now AGEIA. Formerly, Tom learned all about customer service and real-world statics and dynamics (including some bloody-ragdoll effects!) as the 10-year sole proprietor of a building construction firm.

Wednesday May 25, 2005
Digital Cinema: The History, The Technology, The Deployment
by Bill Mead, Bill Kinder and Walt Husak

Topics

  • The Historical Evolution of Digital Cinema
  • Issues of Deployment
  • Political Issues
  • Why is Digital Cinema a Complex Problem?
  • Cinema Fundamentals
  • Digital Cinema Overview
  • Standards
  • Timeline of Digital Cinema at Pixar
  • Advantages of Digital Projection over Film
  • Business Considerations
  • Authoring Issues
  • Hidden Roadblocks

Speakers

Bill Mead
founder and publisher of DCinemaToday.com

Bill Mead is the founder and publisher of DCinemaToday.com, an on-line publication focused on the emerging digital cinema industry. Bill has plenty of experience in market development for cinema technologies. For two years prior to launching DCinemaToday in 2003, Bill consulted with TI's DLP Cinema group. Bill previously spent six years as VP of Marketing for Sony's cinema sound group (SDDS) and 19 years with Dolby Laboratories in a range of cinema-related technical and market development positions. Bill is a member of SMPTE's DC28 digital cinema standards committee and is a past director of the International Theatre Equipment Association (ITEA).

Bill Kinder
Director, Editorial & Post Production
Pixar Animation Studios

Bill Kinder leads Pixar's efforts in Digital Cinema, realizing the first ever digital theatrical release of a digitally produced film, Toy Story 2. He has pioneered digital mastering methods at the studio since the all digital release of A Bug's Life on DVD. He produced the DVD for Finding Nemo. Most recently he oversaw Pixar's largest international digital cinema release to date, The Incredibles.

Walt Husak
Senior Manager, Electronic Media
Dolby Laboratories, Inc.

Walt Husak is Senior Manager, Electronic Media at Dolby Laboratories. He began his television engineering career at the Advanced Television Test Center (ATTC) in 1990 carrying out video objective measurements and RF multipath testing of HDTV systems proposed for the ATSC standard.

Prior to joining Dolby, Walt worked on issues related to the deployment of HDTV systems in the US and the rest of the world. Walt demonstrated the world's first Digital On-Channel Repeater to extend Digital Television signals into obstructed areas. He developed a mechanism for capturing RF signals for offline analysis and distribution to receiver vendors. Walt spent many years lecturing on topics such as video compression, RF transmission for DTV, and overcoming multipath signals in urban and rural environments.

Walt joined Dolby in 2000 as the first video compression expert for Digital Cinema and has spent the last several years studying and reporting on advanced compression systems for Digital Cinema, Digital Television, and HD DVD. He has managed or executed visual quality tests for DCI, ATSC, Dolby, and MPEG.

Walt is now a member of the New Technologies Team, focusing his efforts on JPEG2000 and advanced MPEG codecs for Digital Cinema and Digital Television. Walt provides industry lectures on Digital Cinema systems and image compression. He has authored numerous articles and papers for major industry publications.

Walt is currently Vice-Chairman of SMPTE DC28 and the liaison to and from MPEG and JPEG. He served as the Chairman of the MPEG Digital Cinema Group and as secretary of the SMPTE DC28 Projection Group. He also served as a member of the ITU task group on Digital Cinema. Walt is a member of SMPTE, MPEG, JPEG, IEEE, and SPIE.

Thursday April 21, 2005
CELL: A New Platform for Digital Entertainment
by Mark DeLoura and Dominic Mallinson

This presentation from Sony Computer Entertainment gives a technical overview of the first CELL processor, touching on hardware architecture, programming model and software strategy. The CELL project is a joint venture between Sony, Toshiba and IBM to create a new generation microprocessor architecture.  After four years of design and development, the first CELL processor was recently unveiled at the ISSCC semiconductor conference.

Thursday March 24, 2005
Integration of Combustion 4 and 3D Studio Max 7
by Hagé van Dijk

Hagé van Dijk is a computer industry veteran and digital media pioneer who has worked in software and hardware development for over 17 years. At Apple Computer he was a member of the original QuickTime team and contributed to products including Apple MIDI Manager, QuickTime 1.0, the QuickTime Starter Kit, QuickTime 3.0, the Apple DVD Player, and Sound and MIDI compatibility for the Mac OS (v6.04-8.6). Other critically acclaimed products include Radius VideoVision and VideoVision Studio, Radius Telecast, Radius Studio Array, Digital Origin's EditDV (the first DV-based NLE application, which kicked off the "DV revolution"), and the award-winning Cleaner family of encoding products (Terran/Media100, Discreet). Most recently he was the compression and visual effects product specialist (AE) at Discreet for cleaner and combustion. In this role he continued to develop and evangelize cleaner technology, focusing on 2D/3D integration between 3ds Max and combustion, integration with Discreet systems (flame, inferno), and the use of vector-based paint for restoration, wire removal, rotoscoping, and visual effects. He advises and supports clients in the broadcast and computer industries by creating settings, content, lectures, seminars, webcasts, and demonstrations covering many aspects of visual effects creation, media publishing, and distribution for all formats. Hagé has presented for the National Association of Broadcasters (NAB), Streaming Media, Apple Computer, the SF Art Institute, and at seminars internationally. As a creative professional, Hagé has contributed to all phases of product design, development, technical marketing, and support.

Thursday February 24, 2005
Graphical WOW: Art, History and Technology of Flaming Pear
by Lloyd Burchill

Enhance your graphics with impact. Easily create planets, water, weathered surfaces, organic textures, illumination and mood, richness and complexity -- all in very simple ways. Unusual software produces dramatic effects useful to photographers and digital illustrators alike.

Lloyd Burchill will demonstrate Flaming Pear's increasingly inexplicable Photoshop plug-ins, which draw on science fiction, obscure photographic techniques, panoramas, and now image fusion. How does this indie graphics developer thrive? By staying resolutely small, being weird, and keeping up with the research.

Along the way he'll outline the story of this rewarding niche and explain the interesting technology under the hood. Amid abundant pretty pictures you'll hear about the intertwining pressures of tech, aesthetics, and audience, the influence of SIGGRAPH, and the importance of dirt. He'll reveal what's next and share useful tricks he's picked up over the years.

February 3, 2005
How well do people have to see each other for visual telecommunications to work well, or would people prefer not to be seen precisely?
by Jaron Lanier

Visual telecommunication technologies have never met human factors requirements. Worse, the full extent of the requirements is not yet known. We are learning a lot, however. It is now possible to propose lines of demarcation between high-end video conferencing and tele-immersion. Tele-immersion ought to convey certain cues that demonstrably matter to the outcome of communication even if non-specialist participants are not able to articulate the nature of these cues as they are experiencing them. Fundamental physical limits make the design of high performance tele-immersive interfaces difficult, but strategies are emerging that are likely to overcome known barriers. We are close enough to reaching the goal of quality tele-immersion that it's time to ask how tele-immersive fidelity might need to be limited in precise ways to bring out the best in the curious social species tele-immersion is intended to help.

Jaron Lanier is a Visiting Scientist at SGI, and he is also an External Fellow at the International Computer Science Institute at Berkeley

January 20, 2005
SIGGRAPH Electronic Theater
Introduced by Christoph Bregler

This special screening presents the most astounding computer graphics animated shorts from the past year, the highlight experience of the 2004 SIGGRAPH computer graphics conference in Los Angeles. This year's presentation will include footage that is not on the DVDs. The Electronic Theater program contains 20 pieces covering animation, effects, storytelling, and visualization.

Christoph Bregler is an Associate Professor at the Media Research Lab at NYU. His primary research interests are in vision, graphics, and modeling. He is currently focused on the animation of human movements. This includes visual motion capture and human face, speech, and full-body motion analysis and synthesis.

2004 Events:

December 16, 2004
Full Spectrum Command: A commercial-platform training aid
by Michael Van Lent

In the commercial platform training aids project, the University of Southern California's Institute for Creative Technologies (ICT) seeks to develop training aids that capitalize on the technologies, development practices, and hardware of the commercial games industry. To date the project has produced Full Spectrum Command, a PC-based training aid currently in use at Ft. Benning, and Full Spectrum Warrior, an Xbox-based training aid currently undergoing pedagogical evaluation by the Army Research Institute. A modified, commercial version of Full Spectrum Warrior was also the #1 selling Xbox title in June of 2004. The story behind this unique integration of military training tool and commercial entertainment product includes elements of research in educational design and artificial intelligence, as well as an interesting exploration of the business model behind commercial game consoles. In this talk Dr. van Lent will discuss the process that led to the development of Full Spectrum Warrior and the academic research involved, and will demonstrate the differences between the military training version and the commercial entertainment version.

Dr. Michael van Lent completed his Ph.D. in 2000 under the direction of Dr. John Laird at the University of Michigan. His dissertation, titled Learning Task-Performance Knowledge through Observation, explored how knowledge for real-time intelligent agents, such as simulated tactical air combat pilots, could be acquired from observations of human experts more quickly and cheaply than with traditional knowledge acquisition techniques. After a one-year post-doc at the University of Michigan, Dr. van Lent joined the University of Southern California's Institute for Creative Technologies (ICT) as a research scientist. Dr. van Lent was the lead researcher for ICT's Commercial Platform Training Aids project, which resulted in Full Spectrum Command, a PC-based company command training aid, and Full Spectrum Warrior, an Xbox-based squad leader training aid. Now a project leader, Dr. van Lent's areas of interest include advanced AI techniques for commercial computer games, the use of commercial game technology for training and education, and the integration of commercial game technology with military simulation technology. His current research focuses on explainable artificial intelligence and adaptive opponents for military training and computer games. Dr. van Lent is also active in the growing collaboration between academic researchers and commercial game developers as the Editor-in-Chief of the Journal of Game Development and a frequent contributor to the Game Developers Conference.

November 16, 2004
Point Primitives and Pointshop3D
by Mark Pauly

Point primitives have experienced a major "renaissance" in recent years, and considerable research has been devoted to efficient representation, modeling, processing, and rendering of point-sampled geometry. There are two main reasons for this new interest in points: First, we have witnessed a dramatic increase in the polygonal complexity of computer graphics models. The overhead of managing, processing, and manipulating very large polygonal-mesh connectivity information has led many leading researchers to question the future utility of polygons as the fundamental graphics primitive. Second, modern 3D digital photography and 3D scanning systems acquire both the geometry and the appearance of complex, real-world objects. These techniques generate huge volumes of point samples, which constitute discrete building blocks of 3D object geometry and appearance, much as pixels are the digital elements for images.

Pointshop3D is a system for interactive shape and appearance editing of 3D point-sampled geometry. By generalizing conventional 2D pixel editors, it supports a great variety of interaction techniques to alter the shape and appearance of 3D point models, including cleaning, texturing, sculpting, carving, filtering, and resampling. One key ingredient of the framework is a novel concept for interactive point cloud parameterization, allowing for distortion-minimal and aliasing-free texture mapping. A second is a dynamic, adaptive resampling method that builds upon a continuous reconstruction of the model surface and its attributes. These techniques make it possible to transfer the full functionality of 2D image editing operations to the irregular 3D point setting. The Pointshop3D system reads, processes, and writes point-sampled models without intermediate tessellation. It is intended to complement existing low-cost 3D scanners and point rendering pipelines for efficient 3D content creation.
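
To make the "pixel editor generalized to points" idea concrete, here is a toy C sketch in the spirit of (but not taken from) Pointshop3D: appearance attributes live directly on the point samples, and a brush edits them in place, with no mesh or tessellation anywhere.

    /* surfel_paint.c: toy illustration of editing point-sampled
     * geometry directly. Each point carries appearance attributes,
     * and a brush blends color into every point within its radius. */
    #include <math.h>

    typedef struct { float x, y, z; float r, g, b; } Surfel;

    void paint(Surfel *pts, int n,
               float cx, float cy, float cz,  /* brush center */
               float radius,
               float br, float bg, float bb)  /* brush color */
    {
        for (int i = 0; i < n; i++) {
            float dx = pts[i].x - cx, dy = pts[i].y - cy,
                  dz = pts[i].z - cz;
            float d = sqrtf(dx * dx + dy * dy + dz * dz);
            if (d >= radius) continue;
            float w = 1.0f - d / radius;      /* linear falloff */
            pts[i].r += w * (br - pts[i].r);
            pts[i].g += w * (bg - pts[i].g);
            pts[i].b += w * (bb - pts[i].b);
        }
    }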

Mark Pauly is a postdoctoral scholar with the Guibas lab at Stanford University. His research interests include point-based graphics, geometric modeling, physics-based animation, and shape analysis. He received his PhD in 2003 from the Swiss Federal Institute of Technology, Zurich, Switzerland. He is a contributor to Pointshop3D, an open-source framework that facilitates the design of new algorithms for point-based graphics.

August 8 - 12, 2004
ACM SIGGRAPH Conference in Los Angeles California

Diane E. Shapiro created the article "Job Searching at SIGGRAPH 2004" for the Silicon Valley SIGGRAPH Chapter

Here is a link to the article:
http://silicon-valley.siggraph.org/MeetingNotes/siggraph2004.htm

June 3, 2004
Electric Sheep: animating and evolving artificial life-forms
by Scott Draves

Electric Sheep is a distributed screen-saver that harnesses idle computers into a render farm for animating and evolving artificial life-forms. Each clip of animation has a genetic code, and the collective voting of users determines its fitness. In the next version, a P2P network will distribute the bandwidth of sharing the video and votes.
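
A toy model of the "voting determines fitness" mechanism might look like the following C sketch (our illustration, not the actual Electric Sheep code): the probability that a sheep's genome is selected for breeding is proportional to its accumulated votes.

    /* sheep_select.c: toy fitness-proportional (roulette-wheel)
     * selection where a genome's fitness is its accumulated user
     * votes; illustrative only. */
    #include <stdlib.h>

    int select_parent(const int *votes, int n)  /* returns an index */
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += votes[i] > 0 ? votes[i] : 0;  /* skip downvoted */
        if (total == 0) return rand() % n;         /* no votes: uniform */
        long r = rand() % total;
        for (int i = 0; i < n; i++) {
            r -= votes[i] > 0 ? votes[i] : 0;
            if (r < 0) return i;
        }
        return n - 1;
    }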

Scott Draves, a.k.a. Spot, is a visualist and programmer residing in San Francisco. He is the creator of the Fractal Flame algorithm, the Bomb visual-musical instrument, and the Electric Sheep distributed screen-saver. All of Draves' software artworks are released as open source and distributed for free on the internet. His award-winning work has appeared in Wired Magazine, the Prix Ars Electronica, the O'Reilly Emerging Technology Conference, and at the Sonar festival in Barcelona. In 1997 Spot received a PhD in Computer Science from Carnegie Mellon University for a thesis on metaprogramming for media processing. Today he regularly projects live video for underground parties and at clubs, and he just released SPOTWORKS, a DVD of abstract animation synchronized with music. http://spotworks.com, http://draves.org

May 13, 2004
Matrix Revolutions: Techniques and Methodologies With Large Scale Sentinel 'Swarm' Scenes
by Mike Morasky

This presentation covers the techniques and methodologies used to create the large sentinel swarm scenes in The Matrix Reloaded and The Matrix Revolutions.

Mike Morasky was a lead Technical Director on the "siege" sequence of the Matrix sequels (Reloaded and Revolutions) at ESC Entertainment. He was involved in the design and implementation of the production pipeline for that sequence and specifically in charge of handling the sentinels and their "swarming" system. Before that he worked at Weta on the "Lord of the Rings" trilogy where he was a Lead Technical Director in the Massive department. Mike is currently a CG Supervisor for "Circle-s FX" working on "Catwoman".

April 22, 2004
Multi-Threading Real-Time 3D Graphics
by Dean Macri

Real-time 3D graphics applications continue to demand higher-performance platforms. One way to increase the performance of your application is to multi-thread it to take advantage of multiprocessor systems, as well as systems with Intel processors featuring Hyper-Threading Technology. This presentation will discuss threading concepts and how they apply to multiprocessor systems and Hyper-Threading.
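
As a minimal illustration of the threading concepts in question (our sketch, not Intel's sample code; names and sizes are arbitrary), here is a per-frame particle update decomposed across POSIX threads:

    /* particles_mt.c: minimal pthreads data decomposition for a
     * per-frame update loop, one chunk per worker thread.
     * Build: cc particles_mt.c -lpthread */
    #include <pthread.h>

    #define N 100000
    #define THREADS 4

    static float pos[N], vel[N];

    typedef struct { int begin, end; float dt; } Chunk;

    static void *update(void *arg) {
        Chunk *c = (Chunk *)arg;
        for (int i = c->begin; i < c->end; i++)
            pos[i] += vel[i] * c->dt;  /* independent per-particle work */
        return 0;
    }

    void step(float dt) {
        pthread_t tid[THREADS];
        Chunk chunk[THREADS];
        for (int t = 0; t < THREADS; t++) {
            chunk[t].begin = t * N / THREADS;
            chunk[t].end = (t + 1) * N / THREADS;
            chunk[t].dt = dt;
            pthread_create(&tid[t], 0, update, &chunk[t]);
        }
        for (int t = 0; t < THREADS; t++)
            pthread_join(tid[t], 0);   /* frame is done when all join */
    }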

Dean Macri has a B.A. in mathematics and computer science with a minor in physics from St. Vincent College, Pennsylvania, and an M.S. in computer science from the University of Pennsylvania. After completing his master's degree in 1992, he spent five years developing highly optimized C, C++, and assembly language routines for a 2D graphics animation company. He joined Intel in January, 1998 to further pursue his interests in 3D computer graphics. Dean is currently a staff technical marketing engineer in the Software and Solutions Group at Intel. He works primarily with game developers to help them optimize their games for present and future processor architectures and take advantage of the processing power available to enable exciting new features.

April 8, 2004
Making and Measuring Effective Virtual Environments
by Fred Brooks

Hosted by the Software Development Forum (SDForum)

Event Co-hosted by
BayCHI, SEM SIG
Silicon Valley ACM SIGGRAPH

Series Co-Hosts
Computer History Museum, CSPA, ACM San Francisco Bay Area Chapter

Fred Brooks is a legendary figure in computing. He led the development of the IBM System 360, wrote "The Mythical Man-Month: Essays on Software Engineering", and founded the Computer Science department at the University of North Carolina. His many awards include the National Medal of Technology, the A.M. Turing award of the ACM, the Bower Award and Prize of the Franklin Institute, and the John von Neumann Medal of the IEEE.

In this talk, Dr. Brooks will discuss his current work in virtual environments. The Effective Virtual Environments project at Chapel Hill is trying to determine which technological factors are crucial, which important, and which are negligible in making virtual environments illusions effective. Says Brooks, "We have studied eight different factors so far, with interesting and sometimes surprising results. I shall briefly describe the experiments and the chief findings."

March 25, 2004
3D Display Technology - from Stereoscopes to Autostereo displays
by Ian Matthew

In this presentation, Ian Matthew will discuss the history of 3D display devices and the technology involved in bringing out-of-the-screen 3D effects to a viewing audience. The presentation will discuss how we see in 3D, how 3D display technology works, what is involved in displaying 3D on computer monitors and the software requirements involved in making this happen.

In the past few years, the Holy Grail has been to achieve 3D viewing experiences without encumbering the viewer with glasses. A number of companies have achieved this to some extent. This presentation will also cover how this has been achieved and will introduce Sharp's new switchable glasses-free 3D LCD display technology. The new Sharp Actius RD3D laptop computer, which includes the switchable 3D LCD, will be demonstrated at the end of the presentation.

Since 3D display technology also requires collaboration between computer software and content, Sharp announced the formation of a 3D Consortium. The goals of this consortium will be covered and an attempt will be made to look into the crystal ball.

Ian Matthew is 3D Business Development Manager for Sharp Systems of America. He is responsible for marketing activities and the developer partner program for the Sharp 3D laptops and displays. In this role he is continually adding new software providers to the Sharp Actius RD3D platform.

Ian recently joined Sharp from StereoGraphics Corporation, a company based in San Rafael, California, which produces 3D viewing glasses for the professional market. At StereoGraphics, Ian held a number of positions, including Director of Marketing and Director of Product Management. As Product Manager for the StereoGraphics SynthaGram autostereo display, Ian played a major role in bringing the company's first glasses-free display to market.

Prior to joining StereoGraphics, Ian worked in software marketing in the Computer-Aided Design industry for major companies including Intergraph Corporation, Autodesk, Rebis Industrial Workgroup, and Dassault Systemes. Ian has a B.Sc.Tech. and an M.Sc.Tech. in Chemical Engineering and Fuel Technology from Sheffield University in the UK.

February 12, 2004
Valentine's Day Special
Friendship, Dating and Fun in the Virtual World of "There" (www.there.com)
by Will Harvey and friends

Science fiction authors have been writing about virtual worlds for twenty years, and entrepreneurs have been chasing that vision for almost as long, looking for the elusive killer app.  Could online socializing finally be it?  Online socializing has become a mainstream phenomenon, but remains predominantly a medium of typing text.  Is text the end of the line, or is online socializing destined to become a medium in which people can communicate and interact as avatars?  In the six years of developing There, we attempted to create an online world where communicating as avatars felt natural and fun, like communicating in the real world.  We found the challenge to be much more difficult and subtle than we ever imagined.

This presentation chronicles the challenges There had to overcome in its development, the solutions the team found, and the lessons they learned, giving examples from some of the earliest prototypes to the live service that is running today.

Will Harvey - Founder of There
Will is a seasoned entrepreneur with a strong background in computer science, software, and video game development. Will founded There in 1998 out of a small room in his parents' house, where he recruited the technology team and built an end-to-end prototype before raising capital to grow the company and hire the management team.

Before founding There, Will ran the dynamic media products at Adobe Systems, including After Effects and Adobe Premiere, the world's leading video editing program. Will came to Adobe when Adobe acquired Will's previous company, Sandcastle, which Will had founded to develop network technology to enable low-latency interaction over the internet. Prior to Sandcastle, Will served as Vice President of Engineering at Rocket Science Games in San Francisco, where he led the company's transition from full-motion-video-based games to games focused on interactivity.

Prior to Rocket Science, Will founded and ran several successful game development companies while simultaneously earning his Bachelor's, Master's and Doctorate degrees in computer science from Stanford. Will's doctoral thesis introduced several important search algorithms which are now used commercially in manufacturing scheduling.

Will's game companies produced Platinum and Gold game titles including Zany Golf, Immortal, and Music Construction Set, with combined sales of over a million units. Will has filed 5 patents related to networking, graphics, and automated scheduling. He wrote his first commercial video game at the age of 15.

Jade Raymond, Producer and Developer
Raymond is a seasoned product manager and producer with a knack for developing fun and addictive online entertainment. She has more than five years of experience leading the development of large-scale "triple A" games and persistent worlds. Before joining the marketing team at There as a Product Manager, Jade was the Producer of The Sims Online, where she was directly responsible for all design and implementation of online game features and content for EA's highest revenue-generating wholly owned property. As Producer of EA's most anticipated new product, Jade managed and led multiple teams of producers, artists, engineers, designers, and testers, overseeing development and ensuring The Sims Online's place in the media. Prior to The Sims Online, Jade founded the first Research and Development group within Sony Online. Her team was responsible for leveraging Sony IP across multiple platforms and ultimately building Sony Online's most trafficked offerings: the entire suite of Jeopardy games played by over 3,000 simultaneous users on a daily basis. Prior to Sony Online, Jade Raymond developed the first ever massively multi-user 3D shopping experience as part of a special project for Microsoft's advanced research group and participated in the creation of best-selling entertainment titles for both IBM and Crayola.

January 15, 2004
The SIGGRAPH 2003 Electronic Theatre
by Silicon Valley ACM SIGGRAPH

Video presentation of cutting edge computer animation from the 2003 SIGGRAPH conference.

2003 Events:

November 13, 2003
Extensible 3D Graphics (X3D)
by Tony Parisi and special guests

Join X3D editor and co-author Tony Parisi and special guests for an evening discussing Extensible 3D Graphics (X3D), the new standard for web and broadcast 3D graphics content.

Tony Parisi - "If you have to ask the question..."
President, Media Machines, Inc.
Tony will provide a 10-minute overview of X3D, a description of its features, the state of the practice, and a look to the future.

Tony Parisi is a technology pioneer and accomplished entrepreneur at the forefront of Internet New Media. Tony is co-creator of the Virtual Reality Modeling Language (VRML), the ISO standard for 3D graphics on the World Wide Web, and is widely recognized as an expert in standards, technologies and emerging markets for interactive rich media. In 1995 Tony founded Intervista Software, an early innovator in real-time, networked 3D graphics technology and developed WorldView™, the first real-time VRML viewer for Microsoft Windows. In 1998 Intervista was purchased by PLATINUM technology, inc. and Tony joined the company to lead business affairs for its 3D visualization group.

In 1999 Tony founded Media Machines to provide business planning, business development, and technology consulting services to companies in the Bay Area and worldwide. Tony is spearheading the development of Flux™, a real-time 3D technology that continues to push the envelope in interactive graphics for the web. Tony is also a lead editor and co-chair of the Extensible 3D (X3D) Specification, the new standard for Web3D graphics being developed by the Web3D Consortium.

Alan Hudson - The Xj3D Browser
President, Yumetech, Inc.
Alan will give a tour of Xj3D, Yumetech's cross-platform X3D browser and application toolkit written in Java.

Alan has been involved in virtual reality systems for the past 7 years. He currently leads the Open Source task group of the Web3D Consortium, and is a co-author of a book on the Java programming language. His previous projects include the development of immersive training environments, an online library publishing and automation system, and software for collaborative manufacturing design.

David Arendash - Unreal->X3D Converter
Multi-Media Development Engineer, The ManyOne Network
David will show how game-quality spaces can be authored in the Unreal level editor and easily converted to web games running in X3D.

David Arendash is a multimedia design engineer, responsible for turning artistic vision into practical implementation. Mr. Arendash began designing digital hardware at the age of thirteen and has been programming computers since the age of fifteen. Prior to joining ManyOne Networks, Mr. Arendash ran an independent development contracting firm, Quantum Leap Computing, whose clients included Xerox, StoryVision, Strategic Mapping, and Ziff-Davis/PC-Labs. Quantum Leap Computing focused on multi-player online card games using Flash 5 and PHP, and on the development of many 2D, 3D, and game development utilities, including modifications, add-ons, and levels for the Unreal/Tournament engines. Previously, Mr. Arendash was responsible for an inter-team middleware library and tools at Intuit for over four years. Before that, he worked in development and engineering capacities for companies including Software Publishing in Mountain View, California; DEST Corporation in Milpitas, California; Allen-Bradley Company in Highland Heights, Ohio; and Wang Laboratories in Lowell, Massachusetts. David received his B.S. in Computer Engineering from Case Western Reserve University in 1984.

Keith Victor - Vizx3D
President, Virtock Technologies, Inc.
Keith will demonstrate Vizx3D, an authoring tool for X3D and VRML. Keith will be presenting some content created using Vizx3D, and demonstrating how complex features, such as H-Anim Avatars and MultiTexturing, can quickly be authored.

After graduating with a BS and MS in Mechanical Engineering from the University of Wisconsin-Madison, Keith began his career as a project engineer for the Delco Products Division of GM in 1989. In 1993, Keith left GM to work for SDRC, a provider of CAD software. In 1999, Keith left SDRC to found Virtock Technologies, Inc., where he released Spazz3D, an early entry into the VRML tool market. Keith sold Spazz3D to Eyematic Interfaces, Inc., where he worked as a Software Engineer until 2003. When Eyematic ceased operations, Keith regained the IP for Spazz3D and revived Virtock Technologies, Inc. with the upcoming release of Vizx3D, an authoring tool for X3D and VRML.

David Petchey - Planet9 Projects
Software Architect, Planet 9 Studios
David will highlight X3D projects and products developed by Planet9 studios, including eSCENE, a 3D Command and Control interface to aid in the real time management of terrorist incidents, accidents and training scenarios.

David has 20 years of experience as a programmer and software project leader covering a wide range of disciplines, with particular expertise in multimedia Windows development. David has designed, implemented, debugged, and shipped numerous products primarily for the PC and can take any software project through its full development cycle. David has worked for large corporations such as Microsoft and Mindscape as well as co-founding and consulting to several startups.

October 16, 2003
SIGGRAPH 2003 Technical Papers Review and Discussion
by Silicon Valley ACM SIGGRAPH

This is a discussion-style meeting (as opposed to our normal presentation style), during which we will discuss the technological advancements represented by the technical papers presented at SIGGRAPH 2003. Participation is encouraged. It is highly recommended that you bring your SIGGRAPH Proceedings.

Because of the participatory nature of this meeting, there will be no charge for attending this meeting. However, we will be ordering a pizza and donations will be solicited to cover its cost.

October 9, 2003
Forbidden Animation
by Karl F. Cohen

Are you offended by drawings of cows with udders? Seventy years ago, censors in the United States decided that if you showed a cow in an animated cartoon, it had better be wearing a dress to cover those teats.

Animation has been censored for a variety of reasons, including being too risqué or too violent, showing racial stereotypes, and possibly having subversive content. Forbidden Animation is a film program that explores images and words that were banned when formal censorship was established in America in 1934. The program, based on Karl Cohen's book "Forbidden Animation: Censored Cartoons and Blacklisted Animators in America", illustrates what was once considered acceptable but was then outlawed by a document called the Production Code. From 1934 to 1968 it was enforced by men with scissors who took their jobs seriously.

Karl F. Cohen teaches animation history at San Francisco State University and is president of ASIFA-San Francisco, a chapter of the international animation association. Prior to SFSU, he taught at Toledo University and was a curator at the Toledo Museum of Art. He has produced various commercial and personal film projects and has been a guest presenter at film festivals in Europe, Israel, Canada, and the United States.

September 25, 2003
The OpenGL Shading Language
by Jon Leech

The recent trend in graphics hardware has been to replace fixed functionality with programmability in areas that have grown exceedingly complex (e.g., vertex processing and fragment processing). The OpenGL Shading Language has been designed to allow application programmers to express the processing that occurs at those programmable points of the OpenGL pipeline. A desire to expose the extended capability of the hardware has resulted in a vast number of extensions being written and an unfortunate consequence of this is to reduce, or even eliminate, the portability of applications, thereby undermining one of the key motivating factors for OpenGL. A natural way of taming this complexity and the proliferation of extensions is to allow parts of the pipeline to be replaced by user programmable stages. This has been done in some recent extensions but the programming is done in assembler, which is a direct expression of today's hardware and not forward looking. Mainstream programmers have progressed from assembler to high-level languages to gain productivity, portability and ease of use. These goals are equally applicable to programming shaders. The goal of this work is a forward looking hardware independent high-level language that is easy to use and powerful enough to stand the test of time and drastically reduce the need for extensions. These desires must be tempered by the need for fast implementations within a generation or two of hardware.
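
For concreteness, here is a minimal C sketch of how an application hands high-level shader source to the driver through the standard OpenGL 2.0 entry points; the trivial fragment shader is our own example, and a current OpenGL 2.0 context (and, on some platforms, an extension loader) is assumed:

    /* shader_load.c: minimal OpenGL Shading Language setup. The
     * fragment shader source is compiled by the driver at run time,
     * replacing fixed-function fragment processing with a user
     * program; no assembly-level extensions are involved. */
    #include <GL/gl.h>

    static const char *fs_src =
        "void main() {\n"
        "    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);\n"
        "}\n";                          /* solid orange, GLSL 1.10 */

    GLuint make_program(void) {
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fs_src, 0);
        glCompileShader(fs);            /* driver compiles the source */
        GLuint prog = glCreateProgram();
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;                    /* bind with glUseProgram(prog) */
    }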

Jon Leech is the technical lead of the OpenGL engineering group at Silicon Graphics. As Secretary of the OpenGL Architecture Review Board, he has led the group since 1997 and edited the OpenGL 1.2 - 1.5 API Specifications. He also participates in related standards groups including the OpenGL ES working group of the Khronos SIG, and a new joint effort with Sun Microsystems to standardize Java bindings to OpenGL.

Prior to joining SGI, Jon did research in interactively steered molecular modelling and highly parallel computer graphics architectures at the University of North Carolina at Chapel Hill. He has an M.S. in Computer Science from the California Institute of Technology.

July 27-31, 2003
SIGGRAPH 2003 Conference in San Diego
by ACM SIGGRAPH

Diane E. Shapiro created the article "A Foray into Experimentation, Illusion and Surreal Expression at the SIGGRAPH 2003 Art Gallery" for the Silicon Valley SIGGRAPH Chapter

Here is a link to the article:
http://silicon-valley.siggraph.org/MeetingNotes/ArtGallery2003/siggraph2003.htm

June 19, 2003
Autocad 2004
by Lynn Allen

Ask 10 CAD managers to define "CAD standards," and you will probably get 10 different answers. Although most standards include some sort of layering scheme, most offices also include guidelines for plotting, file and directory naming, as well as many other areas that affect the quality of CAD output.

Modern CAD systems, such as AutoCAD® 2004 software, are extraordinarily flexible and can be tailored to fit just about any workflow. This flexibility comes at a price, however; different users can use vastly differing methods to produce a drawing, with different visual results. As a CAD manager you need to establish standards to ensure that your firm's output is of a consistent high quality, no matter who produced a particular drawing, or when.

The concept of ensuring consistent output from a design or drawing office is not new. Even in the days of the drawing board, responsible firms followed design and drafting guidelines that specified standards, for instance, sheet sizes and scale, text and dimension sizes, and styles. Modern technology has introduced many new ways to produce drawings. Although computers and CAD systems have brought many productivity benefits, they have to be carefully managed to produce the desired results.

For more information, go to the following link:

http://www3.autodesk.com/adsk/files/2704184_AutoCAD2004_CAD_Stds.pdf

Lynn Allen, CADENCE columnist and worldwide Autodesk Technical Evangelist, speaks to more than 15,000 users each year. For the past eight years she has written a monthly column in CADENCE magazine called "Circles and Lines." Lynn started using AutoCAD® software with Release 1.4, over 16 years ago, and has taught at the corporate and collegiate level for 13 years. A sought-after public speaker with a unique comedic style, Lynn is always one of the highest rated speakers at Autodesk University®. Her latest writing endeavor is AutoCAD 2002 Inside and Out.

For more information on Lynn, please visit www.autodesk.com/lynnallen

May 22, 2003
SIGGRAPH Paper Presentations
by Eran Guendelman and Ren Ng

Nonconvex Rigid Bodies with Stacking
by Eran Guendelman

In this talk, we will present our SIGGRAPH 2003 paper "Nonconvex Rigid Bodies with Stacking". We consider the simulation of nonconvex rigid bodies focusing on interactions such as collision, contact, friction (kinetic, static, rolling and spinning) and stacking. We advocate representing the geometry with both a triangulated surface and a signed distance function defined on a grid, and this dual representation is shown to have many advantages. We propose a novel approach to time integration merging it with the collision and contact processing algorithms in a fashion that obviates the need for ad hoc threshold velocities. We show that this approach matches the theoretical solution for blocks sliding and stopping on inclined planes with friction. We also present a new shock propagation algorithm that allows for efficient use of the propagation (as opposed to the simultaneous) method for treating contact. These new techniques are demonstrated on a variety of problems ranging from simple test cases to stacking problems with as many as 1000 nonconvex rigid bodies with friction.
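
As a taste of why the grid-based signed distance representation is convenient for contact handling (a generic sketch of the idea, not the paper's code; all names are ours): penetration depth at a point is one grid lookup, and the contact normal falls out of finite differences.

    /* sdf_query.c: generic collision query against a signed distance
     * function sampled on a regular grid with cell size h and origin
     * at zero. phi < 0 means inside the body; the gradient of phi
     * gives an outward contact normal. */
    #include <math.h>

    typedef struct { const float *phi; int nx, ny, nz; float h; } Grid;

    static float sample(const Grid *g, int i, int j, int k) {
        return g->phi[(k * g->ny + j) * g->nx + i];
    }

    /* nearest-cell query: returns penetration depth (>0 if colliding)
     * and a unit contact normal from central differences */
    float query(const Grid *g, float x, float y, float z, float n[3]) {
        int i = (int)(x / g->h), j = (int)(y / g->h), k = (int)(z / g->h);
        if (i < 1 || i >= g->nx - 1 || j < 1 || j >= g->ny - 1 ||
            k < 1 || k >= g->nz - 1)
            return -1.0f;                   /* outside the grid */
        n[0] = sample(g, i + 1, j, k) - sample(g, i - 1, j, k);
        n[1] = sample(g, i, j + 1, k) - sample(g, i, j - 1, k);
        n[2] = sample(g, i, j, k + 1) - sample(g, i, j, k - 1);
        float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        if (len > 0) { n[0] /= len; n[1] /= len; n[2] /= len; }
        return -sample(g, i, j, k);         /* depth = -phi */
    }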

Eran Guendelman is a Computer Science PhD student at Stanford University. Working under Ron Fedkiw, Eran's research has focused on physics-based modeling for computer graphics.

All-Frequency Shadows Using Non-linear Wavelet Lighting Approximation
by Ren Ng

Ren's talk will be a preview of his SIGGRAPH 2003 paper, titled "All-Frequency Shadows Using Non-linear Wavelet Lighting Approximation." This work was done with Ravi Ramamoorthi at Columbia and Pat Hanrahan at Stanford. This paper describes real-time rendering of objects under all-frequency illumination represented by high-resolution environment maps. Current techniques are limited to small area lights, with sharp shadows, or large low-frequency lights, with very soft shadows. Our main contribution is to approximate the environment map in a wavelet basis, keeping only the largest terms (this is known as a non-linear approximation). We obtain further compression by encoding the light transport matrix sparsely but accurately in the same basis. Rendering is performed by multiplying a sparse light vector by a sparse transport matrix, which is very fast. For accurate rendering, using non-linear wavelets is an order of magnitude faster than using linear spherical harmonics, the current best technique.
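
The rendering step the abstract describes boils down to a sparse matrix-vector product. Here is a generic compressed-sparse-row sketch of that inner loop in C (our illustration of the idea, not the authors' implementation):

    /* relight.c: generic sparse (CSR) matrix-vector multiply, the
     * shape of the "sparse transport matrix times sparse light
     * vector" step. Each row keeps only the significant wavelet
     * coefficients of the transport for one vertex or pixel. */
    void relight(const float *val, const int *col, const int *rowptr,
                 int nrows,
                 const float *light,  /* wavelet coeffs of the env map */
                 float *radiance)     /* one value per vertex/pixel */
    {
        for (int r = 0; r < nrows; r++) {
            float sum = 0.0f;
            for (int idx = rowptr[r]; idx < rowptr[r + 1]; idx++)
                sum += val[idx] * light[col[idx]];
            radiance[r] = sum;
        }
    }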

Ren Ng is a first year PhD Computer Science student at Stanford, working with Pat Hanrahan at the Stanford Computer Graphics Lab. His current research focuses on real-time relighting with detailed natural lighting, and will be the focus of his talk tonight. He has also published papers on virtualizing graphics hardware in support of real-time shading languages, and on scalable interactive volume rendering using clusters of rendering servers. Ren is actively exploring career options in industry research, graphics hardware, movie making and academia.

April 24, 2003
Anark Studio
by Kevin McGill

Anark Corporation is setting a new standard for interactive digital media performance and visual quality with the Anark Media Platform, the industry's first truly integrated multimedia platform.

Providing an unparalleled experience for the end user, this platform enables artists and multimedia developers to create visually stunning content that integrates real-time 3D and 2D graphics, video, audio, and data into an interactive experience, using drag-and-drop effects and an easy-to-use, timeline-based authoring environment.

Featuring unrivaled flexibility, control, and cost-effectiveness, Anark Studio™ empowers developers and artists to easily author and re-purpose content into captivating broadcast-quality presentations in a unique layered media environment, and to create dazzling content for CBT/eLearning, kiosks, interactive advertising, Web sites, and other interactive applications. These broadcast-quality presentations can be delivered via CD-ROM, intranets, kiosks, the Web, and other digital media outlets.

The Anark Media Platform™ currently consists of Anark Studio™ for content authoring and Anark Client™, a free plug-in used for real-time display of 3D and 2D graphics, video, audio, and data - all in one interactive, broadcast-quality presentation. Additional Anark Media Platform software applications are under development, and will enable superior content creation, delivery and management capabilities.

Anark has brought together experts from the 3D animation, graphics and streaming media arenas to create and build this new interactive multimedia platform. The company has signed partnerships and customers with major industry leaders that are adopting the platform as a means of creating and delivering stunning broadcast-quality media that has never been seen before. Anark is a privately funded company that is rapidly gaining recognition as a leader within the interactive multimedia arena.

Kevin McGill's background is in the digital video and multimedia industry, including DVD authoring, MPEG encoding, video editing, and video streaming markets. Currently, he is responsible for evangelizing Anark Studio to various groups around the country and working with customers who want to incorporate interactive multimedia for their own internal purposes or to create presentations for their customers. He has demonstrated the technology at a number of trade shows including NAB, Comdex, and Siggraph.

March 20, 2003
The SIGGRAPH 2002 Electronic Theatre
by Silicon Valley ACM SIGGRAPH

Video presentation of cutting edge computer animation from the 2002 SIGGRAPH conference

March 7, 2003
Defining New Worlds of Gaming: New Reality Simulation and the Expanding Gaming World
by Robert Bridson, Ron Fedkiw, and Craig Slagel

Ron Fedkiw, Consultant, Industrial Light & Magic
New Reality in Gaming Through Simulation
Simulation adds visual richness to virtual environments. This talk will briefly address some recent breakthroughs in the simulation of rigid bodies, cloth, and flesh for both feature films and biomechanics simulations. Possible extensions to video games will be discussed. We will also briefly touch on the simulation of smoke, water, and fire.

Fedkiw received his Ph.D. in Mathematics from UCLA in 1996 and did postdoctoral studies at UCLA in Mathematics and at Caltech in Aeronautics before joining the Stanford Computer Science Department. He was awarded a Packard Foundation Fellowship, a Presidential Early Career Award for Scientists and Engineers (PECASE), an Office of Naval Research Young Investigator Program Award (ONR YIP), a Robert N. Noyce Family Faculty Scholarship, and two distinguished teaching awards. Currently he is on the editorial board of the Journal of Scientific Computing and IEEE Transactions on Visualization and Computer Graphics, and participates in the reviewing process of a number of journals and funding agencies. He has published approximately 40 research papers in computational physics, computer graphics and vision, as well as a new book on level set methods. For the past two years, he has been a consultant with Industrial Light & Magic.

Craig Slagel, Senior World Wide Graphics Trainer, Electronic Arts
Preparing for the Future of Games

Over the last few years, the PlayStation 2, Xbox, GameCube, and PC have enabled us to improve the visual and interactive quality of games. With the next generation of consoles only a few years away, we need to prepare for another leap in technology and know how to use it effectively. This talk will discuss how we can prepare for the next level of interactive entertainment.

Craig Slagel is a Senior World Wide Graphics Trainer for Electronic Arts in Redwood Shores. He is responsible for the development and delivery of training at EA studios worldwide, and has over eight years of experience in training and production.

February 13, 2003
Pliable Display Technology in Creative Software
by Dr. David Baar

Pliable Display Technology (PDT) by IDELIX has emerged as a powerful and very general framework for providing detail-in-context capabilities within software applications in areas such as creative graphics and video editing. PDT gives software developers the means to offer enhanced data viewing, as well as in-place data editing and interaction operations via its unique Undisplace function. Core capabilities of PDT for graphical applications will be illustrated through a series of demonstration applications into which PDT has been integrated. Novel lens-assisted operations such as area selection, cropping, and precise object positioning will be demonstrated. PDT operations on raster, vector, text, and multilayer data will be shown, as well as recent developments in PDT3d for occlusion reduction in 3D. Dr. Baar will also comment on recent user studies comparing PDT against other techniques for performing steering tasks, and on end-user feedback from production use of PDT.
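
To give a concrete flavor of what a detail-in-context lens does, here is a minimal sketch of a generic radial magnification lens with a smooth falloff. This illustrates the general idea only; it is not IDELIX's actual PDT formulation, and the parameters are made up.

    import numpy as np

    def lens_displace(points, center, radius, magnification):
        """Displace 2D points radially from `center` so detail near the
        center is magnified while points beyond `radius` stay fixed.
        A generic detail-in-context lens, not PDT's actual math."""
        c = np.asarray(center, dtype=float)
        p = np.asarray(points, dtype=float)
        v = p - c
        r = np.linalg.norm(v, axis=-1, keepdims=True)
        t = np.clip(r / radius, 0.0, 1.0)      # 0 at lens center, 1 at lens edge
        # scale eases from `magnification` at the center down to 1 at the edge;
        # this falloff stays fold-free for moderate magnification (below ~4)
        scale = 1.0 + (magnification - 1.0) * (1.0 - t) ** 2
        return c + v * scale

An "undisplace" step would invert this mapping, so that edits made in the magnified view land at the correct undistorted coordinates.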

Attendees are encouraged to download PDT technical information and executable demos from the IDELIX website.

Dr. David Baar, Ph.D. (Physics), M.Sc. (Eng.), is the Chief Technical Officer (CTO) and founder of IDELIX Software Inc. His research interests in software for numerical modeling and image processing led to the formation of IDELIX. Visionary and pragmatic, Dr. Baar has built a team of leading experts in information visualization to develop the company's technology. Dr. Baar completed his graduate research at Queen's University in 1990 with a doctoral thesis on the quantum magnetic properties of superconductors. His post-doctoral work included research at ISTEC in Tokyo, at the University of British Columbia, and at Stanford University; one of the core components of this research was the numerical modeling of complex experiments and materials. He has also worked on image processing software for analyzing data from the Landsat satellites, and on numerous other software and research projects. Alongside his interests in a wide variety of technologies and research areas, Dr. Baar pursues numerous recreational activities such as windsurfing, rock climbing, cross-country skiing, hiking, and photography.

January 16, 2003
Shake: The Industry Standard Compositing Solution for Film and HD
by Charles Meyer

Shake brings a powerful new dimension to Apple's pro film and video lineup and is designed for the most demanding creative professionals, with the speed, quality, and scalability required for creating high-resolution effects in HDTV, film, and IMAX projects. Its reputation in the industry speaks for itself. Shake was used on the last five Academy Award Winners for Best Visual Effects and is taught in most major film schools worldwide. Just as Final Cut Pro has changed the way the industry edits, Shake has changed high-end post production forever by introducing an affordable compositing and visual effects software solution for film and video professionals.

After studying creative arts and completing professional training as a 3D animator, Charles Meyer took a training position at the NAD Center, providing professional and corporate training in system administration, 3D animation, motion capture, video editing, and image compositing. He holds many certifications and currently works at Apple Computer as a systems engineer providing technical and sales support for the pro film and video reseller channel. He also gives product demonstrations and seminars for Final Cut Pro, Cinema Tools, Shake, and DVD Studio Pro.

2002 Events:

November 21, 2002
Programmable Graphics Hardware
by Tim Purcell and Kevin Bjorke

Tonight's presentation will cover the new programmability of graphics hardware. The first part will be given by Kevin Bjorke, who will introduce Cg, a new programming language from NVIDIA that should make programming graphics hardware easier. The second speaker, Tim Purcell, will demonstrate what can be done with this new class of graphics hardware, and will present his recent research on implementing ray tracing with programmable graphics hardware.

Kevin Bjorke is a shading engineer at NVIDIA, specializing in new shading algorithms using Cg and new chip architectures. Previously, he was the imaging and lighting supervisor for "Final Fantasy: The Spirits Within" and also supervised shading architecture for the about-to-be-released "Animatrix" short "Final Flight of the Osiris." Prior to that, he was a TD and layout artist on "A Bug's Life" and "Toy Story," and has created numerous film effects, TV commercials, and theme park rides dating back to the 1980s. He has been involved with shading languages and shade-tree architectures since about 1986, well before the advent of the first version of RenderMan.

Tim Purcell is a Ph.D. student in the Computer Science Department at Stanford University. His research interests include stream programming, ray tracing, and leveraging GPUs for non-traditional uses. He was a recipient of a National Science Foundation Graduate Research Fellowship and is a 2002-03 NVIDIA Fellowship winner. He received a B.S. in Computer Science from the University of Utah in 1998 and an M.S. in Computer Science from Stanford University in 2001.

October 15, 2002
Vision-Realistic Rendering
by Brian A. Barsky

We introduce a new technique called "vision-realistic rendering": a three-dimensional rendering algorithm that simulates the vision of a subject whose optical system has been measured using wavefront aberrometry.

The algorithm uses an input depth map to stratify an initial image into disjoint depth plane images, extends these depth plane images, convolves them with a special object-space blur filter, and composites them to form a final vision-realistic rendered image.
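
For the curious, here is a minimal sketch of that pipeline. It assumes a plain Gaussian blur in place of the object-space filter that would be derived from wavefront aberrometry, omits the depth-plane extension step, and uses an illustrative blur-versus-depth mapping.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def vision_realistic_render(image, depth, n_planes=8, max_blur=4.0):
        """image: (H, W, 3) floats in [0, 1]; depth: (H, W), smaller = nearer.
        Stratify into depth planes, blur each plane, composite front to back."""
        edges = np.linspace(depth.min(), depth.max() + 1e-6, n_planes + 1)
        out = np.zeros_like(image, dtype=float)
        coverage = np.zeros(depth.shape, dtype=float)    # opacity accumulated so far
        for i in range(n_planes):                        # i = 0 is the nearest plane
            mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
            sigma = max_blur * i / max(n_planes - 1, 1)  # toy blur-vs-depth mapping
            color = gaussian_filter(image * mask[..., None], sigma=(sigma, sigma, 0))
            alpha = gaussian_filter(mask, sigma=sigma)
            out += (1.0 - coverage)[..., None] * color   # front-to-back "over"
            coverage += (1.0 - coverage) * alpha
        return np.clip(out, 0.0, 1.0)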

Vision-realistic rendering has many applications in optometry and ophthalmology. Such images could be shown to a patient's eye doctor to convey the specific visual anomalies of the patient. Also, images could be generated using the optics of various ocular conditions, which would be valuable in educating doctors and patients about the specific visual effects of these vision disorders. Furthermore, with the increasing popularity of vision correction surgeries such as PRK (photorefractive keratectomy) and LASIK (laser in-situ keratomileusis), our technique could be used to convey to doctors what the vision of a patient is like before and after surgery, using wavefront aberrometry measured pre- and post-operatively.

In addition, by using modeled or simulated wavefront measurements, this approach could provide accurate and revealing medical visualizations of predicted visual acuity and of simulated vision; such simulations could be shown to potential candidates for such surgery to enable them to make more educated decisions regarding undergoing the procedure. Vision-realistic rendering also has applications in image synthesis and computer animation. It is important to note that the creation of such images based on camera optics follows as a special case of our algorithm. Thus, our approach could be used as a post-process to simulate camera model effects such as depth of field in the generation of synthetic images and computer animation.

Brian A. Barsky is Professor of Computer Science and Affiliate Professor of Optometry and Vision Science at the University of California at Berkeley. He is a member of the Bioengineering Graduate Group, an interdisciplinary and inter-campus program between UC Berkeley and UC San Francisco. He has been a Distinguished Visitor at the School of Computing at the National University of Singapore, a Visiting Professor of Computer Science at The Hong Kong University of Science and Technology in Hong Kong, at the University of Otago in Dunedin, New Zealand, in the Modélisation Géométrique et Infographie Interactive group at l'Institut de Recherche en Informatique de Nantes and l'Ecole Centrale de Nantes, and at the University of Toronto, an Attaché de Recherche Invité at the Laboratoire Image of l'Ecole Nationale Supérieure des Télécommunications in Paris, and a visiting researcher with the Computer Aided Design and Manufacturing Group at the Sentralinstitutt for Industriell Forskning (Central Institute for Industrial Research) in Oslo.

He attended McGill University in Montréal, where he received a D.C.S. in engineering and a B.Sc. in mathematics and computer science. He studied computer graphics and computer science at Cornell University in Ithaca, where he earned an M.S. degree, and received his Ph.D. in computer science from the University of Utah in Salt Lake City. He is a Fellow of the American Academy of Optometry. He is a co-author of the book An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, co-editor of the book Making Them Move: Mechanics, Control, and Animation of Articulated Figures, and author of the book Computer Graphics and Geometric Modeling Using Beta-splines. He has published 100 technical articles in this field and has been a speaker at many international meetings. Dr. Barsky was a recipient of an IBM Faculty Development Award and a National Science Foundation Presidential Young Investigator Award. He is an area editor for the journal Graphical Models, the editor of the Computer Graphics and Geometric Modeling series of Morgan Kaufmann Publishers, Inc., and was the Technical Program Committee Chair for the ACM SIGGRAPH '85 conference.

His research interests include computer aided geometric design and modeling, interactive three-dimensional computer graphics, visualization in scientific computing, computer aided cornea modeling and visualization, medical imaging, and virtual environments for surgical simulation. He has worked on spline curve/surface representation and its applications in computer graphics and geometric modeling for many years, and is applying this experience to improving videokeratography and corneal topographic mapping, forming a mathematical model of the cornea, providing computer visualization of patients' corneas to clinicians, and developing new techniques for contact lens design and fabrication. This research forms the OPTICAL (OPtics and Topography Involving Cornea and Lens) project.

September 19, 2002
Light Field Mapping: Efficient Representation and Hardware Rendering of Surface Light Fields
by Radek Grzeszczuk

A light field parameterized on the surface offers a natural and intuitive description of the view-dependent appearance of scenes with complex reflectance properties. To enable the use of surface light fields in real-time rendering, we develop a compact representation suitable for an accelerated graphics pipeline. In this talk, we present how we approximate the light field data by partitioning it over elementary surface primitives and factorizing each part into a small set of lower-dimensional functions. We show that our representation can be further compressed using standard image compression techniques, leading to extremely compact data sets that are up to four orders of magnitude smaller than the input data. Finally, we describe an image-based rendering method, light field mapping, which can visualize surface light fields directly from this compact representation at interactive frame rates on a personal computer. We demonstrate the results for a variety of non-trivial synthetic scenes and physical objects scanned through 3D photography.
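
As a rough illustration of the factorization idea (not the paper's exact data layout), a per-primitive light field can be arranged as a matrix, with rows indexed by surface sample and columns by view direction, and approximated by a small sum of outer products, for example via truncated SVD:

    import numpy as np

    def factor_light_field(F, k=3):
        """F: (n_surface_samples, n_view_dirs) radiance matrix for one
        primitive. Returns k "surface maps" and k "view maps" whose
        outer products approximate F."""
        U, s, Vt = np.linalg.svd(F, full_matrices=False)
        surface_maps = U[:, :k] * np.sqrt(s[:k])     # g_k over the surface
        view_maps = Vt[:k].T * np.sqrt(s[:k])        # h_k over view directions
        return surface_maps, view_maps

    def reconstruct(surface_maps, view_maps):
        # radiance(x, w) ~= sum over k of g_k(x) * h_k(w)
        return surface_maps @ view_maps.T

Stored as small texture maps, factors like these can be combined per pixel by graphics hardware, which is the kind of structure that makes rendering from a compact representation interactive.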

Radek Grzeszczuk joined Intel in 1998 as a Senior Researcher. He received his Ph.D. degree (1998) and his M.Sc. degree (1994) in Computer Science from the University of Toronto. His Ph.D. thesis research was done under the supervision of Demetri Terzopoulos and Geoffrey Hinton, and focused on using neural networks for fast emulation and control of physics-based models; the results of this work were published at SIGGRAPH'98 and NIPS'98. His pioneering research with Steven Gortler, Michael Cohen, and Richard Szeliski at the Microsoft Research Graphics Group on image-based rendering culminated in the publication of "The Lumigraph" at SIGGRAPH'96. His recent work on image-based modeling and rendering focuses on methods for efficient representation and visualization of complex shape and reflectance properties of objects. He has published a number of important scientific papers, primarily in computer graphics, but also in artificial life, neural networks, and computer vision. In 1995 he received an award from Ars Electronica, the premier competition for creative work with digital media, for his work on artificial animals for computer animation and virtual reality.

September 12, 2002
SIGGRAPH 2002 Technical Papers Review and Discussion
by Silicon Valley ACM SIGGRAPH

This year's SIGGRAPH in San Antonio featured some of the highest-quality technical papers ever, and you are invited to share and learn about them. It is difficult to attend all of the paper presentations, so here is your chance to hear about the exciting papers that others have seen.

This month, on September 12, we will be having a special, interactive and participatory meeting to discuss the technical papers presented at SIGGRAPH.

There will be no charge to attend this event; however, we will be collecting donations to defray the cost of pizza and drinks (about $4-5 each).

June 20, 2002
Eyematic facial animation for 3D models
by Rob Polevoi and Larry McDonough

Creating facial animation for 3D models has been an expensive, time-consuming task requiring sophisticated skill sets. Considerable effort has gone into developing automated methods for facial animation, both to replace the current methods used in the entertainment industry and to open new markets for non-professional users in toys, games, and communications. Eyematic Interfaces has applied its research in computer vision and software facial recognition to create a completely non-invasive, automated solution for facial animation. Its FaceStation software can analyze a video segment containing the face of a human actor and extract facial movement data, which is then used to drive an animation based on a set of standard morph targets. No physical markers need be applied to the actor's face, because the software recognizes key facial features and landmarks automatically. Polevoi will review the basic problems of facial animation, the range of available automated tools, the specific details of FaceStation, and the future direction for this product and for automated facial animation.
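
Below is a minimal sketch of morph-target (blend-shape) animation, the kind of rig such extracted movement data drives. The channel names are hypothetical; FaceStation's actual parameters are not documented here.

    import numpy as np

    def blend_face(base, targets, weights):
        """base: (n_verts, 3) neutral face; targets: dict of (n_verts, 3)
        morph-target meshes; weights: dict of scalars, typically in [0, 1]."""
        face = base.copy()
        for name, w in weights.items():
            face += w * (targets[name] - base)   # add each target's offset
        return face

    # Per video frame, a tracker would emit weights along the lines of:
    # weights = {"jaw_open": 0.4, "smile_left": 0.7, "brow_raise": 0.1}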

Rob Polevoi became the Director of Developer Relations for Eyematic Interfaces, Inc. in 2000. Previously, Polevoi was an Assistant Professor in the Computer and Video Imaging Department at Cogswell Polytechnical College in Sunnyvale, California. Polevoi has also published books on 3D computer graphics: "3D Studio Max R3 in Depth", "3ds max 4 in Depth", and "Interactive Web Graphics with Shout3D".

May 23, 2002
Capturing Motion Models for Animation
by Chris Bregler

We will survey our current research on vision-based capture and animation techniques applied to animals, humans, and cartoon characters. We will present new capture techniques that can track and infer kinematic-chain and 3D non-rigid blend-shape models. Furthermore, we demonstrate how to use such motion capture data to estimate statistical models for synthesis, and how to retarget motion to new characters. Examples include capturing kangaroos, giraffes, human body deformations, and facial expressions; animating hops and dances with natural fluctuations; and retargeting expressive cartoon motion.
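
As a toy illustration of the kinematic-chain models such trackers fit to video (real systems estimate the joint angles from images rather than animating them by hand; the angles and bone lengths below are made up):

    import numpy as np

    def fk_chain(angles, lengths):
        """Forward kinematics for a planar chain: each joint rotates
        relative to its parent. Returns the (x, y) position of every joint."""
        positions = [np.zeros(2)]
        total_angle = 0.0
        for theta, length in zip(angles, lengths):
            total_angle += theta                 # accumulate parent rotations
            step = np.array([np.cos(total_angle), np.sin(total_angle)])
            positions.append(positions[-1] + length * step)
        return np.array(positions)

    # e.g. a three-bone "leg": fk_chain([0.3, -0.6, 0.2], [1.0, 1.0, 0.5])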

This talk reports on joint work with Kathy Pullen, Lorie Loeb, Lorenzo Torresani, Danny Yang, Gene Alexander, Erika Chuang, Hrishi Deshpande, Rahul Gupta, Aaron Hertzmann, and Henning Biermann.

Chris Bregler has been an Assistant Professor of Computer Science at Stanford University since 1998. He received his M.S. and Ph.D. in Computer Science from U.C. Berkeley in 1995 and 1998, respectively, and his Diplom from Karlsruhe University in 1993. He has also worked for several companies, including IBM, Hewlett Packard, Interval, and Disney Feature Animation. He is a member of the Stanford Computer Graphics Lab and founded the Stanford Movement Research Group, which does research in vision and graphics with a focus on motion capture; human face, speech, and body movement analysis and synthesis; and artistic aspects of animation.

April 18, 2002
Furry Pixels: Dynamics in Monsters, Inc.
by David Baraff

Dynamic simulation had a major role in shaping the final look of Monsters, Inc.'s main characters, but from the start it was set in stone that simulation could be used only if it didn't interfere with Pixar's traditional creative process. This talk gives a candid behind-the-scenes look at the core simulation technologies employed to create Monsters, Inc., describes the balancing of creative and technical needs, and reveals the difficult effects that were easy and the simple shots that were hard.

David Baraff joined Pixar Animation Studios in 1998 as a Senior Animation Scientist in Pixar's research and development group. Prior to his arrival at Pixar, he was an Associate Professor of Robotics and Computer Science at Carnegie Mellon University. David Baraff received his Ph.D. in computer science from Cornell University in 1992, and his B.S.E. in computer science from the University of Pennsylvania in 1987. Before and during his graduate studies, he also worked at Bell Laboratories' Computer Technology Research Laboratory doing computer graphics research, including real-time 3D interactive animation and games. In 1992, he joined the faculty of Carnegie Mellon University. In 1995, he was named an ONR Young Investigator. His research interests include physical simulation and modeling for computer graphics and animation.

March 21, 2002
How To Create a Computer Animated Short Film on a Desktop PC
by Lee Lanier

Lee Lanier will talk about creating a computer-animated short film on a desktop PC - taking it from concept to the film festival circuit.

Lanier began his career as a Script Supervisor in Los Angeles, spending many hours on television commercials, movies-of-the-week, and feature films. In 1994, Lanier shifted his career to computer animation at Walt Disney's Buena Vista Visual Effects. In 1996, he joined Pacific Data Images in Palo Alto as a Lead Modeler and Lead Lighter on Antz and Shrek. In his spare time, he produced, directed, and animated two award-winning short films, "Millennium Bug" and "Mirror", and is developing a third short, "Day Off the Dead". Lee Lanier currently works independently, collaborating on 2D and 3D animation projects.

February 28, 2002
NVIDIA's role in the development of the graphics system used by the Microsoft Xbox
by Tony Tamasi

Tony Tamasi will be discussing NVIDIA's role in the development of the graphics system used by the Microsoft Xbox.

Tony Tamasi, senior director of desktop product management, has more than eight years of graphics industry experience. In his current role, he is responsible for managing the strategic direction, definition, and development of the company's desktop graphics products. Mr. Tamasi serves as the primary interface between customers, developers, and the internal product development team. Prior to joining NVIDIA, Mr. Tamasi was director of product marketing at 3Dfx Interactive, Inc. (San Jose, CA) and also held systems engineering roles at Silicon Graphics (Mountain View, CA) and Apple (Cupertino, CA), where he also spent time as the graphics technology evangelist. He holds four degrees with honors from the University of Kansas, in Business, Political Science, History, and Computer Science.

January 17, 2002
3D on The Web, 3D-Online.com
by Mitch Williams

3D graphics is emerging on the Internet as an engaging, interactive medium. Its business applications include product demonstrations, web site design, interactive ads, data visualization, and training. In entertainment, interactive web 3D is used for games and multi-user 3D worlds. Web 3D may go well beyond these applications and become the interface of the future Internet, replacing the 2D graphical interface with a navigable spatial user interface.

This session will take an independent look at the technologies, applications, issues and opportunities for Interactive Web 3D, and will provide guidance and perspective on its future. We will examine web 3D from the point-of-view of customers, developers, educators and management, analyzing their perceptions, misconceptions, issues and solutions. Finally, we will examine the parallels of the emerging Web 3D medium with the evolution of other new technologies and unique content.

Mitch Williams is President of 3D-Online.com, an industry analysis, consulting, and development firm for Interactive Web 3D. At SIGGRAPH 2001, he spoke on "Converting Your 3D Models and Scenes to Interactive Web 3D". Mitch also teaches the Web 3D art and engineering courses at UCLA, UC Berkeley, UC Irvine, and UC Santa Cruz-Silicon Valley Extension.

Previously, Mr. Williams was Manager of Software for children's educational CD-ROM titles, including "Math Blaster", and a Program Manager and Software Engineer for Xerox, working with its Palo Alto Research Center (PARC).

2001 Events:

November 15, 2001
A Brief Tour Through the World of Online Gaming (Shockwave.com)
by John Welch

A few short years ago, video gamers were just teenage, male, propeller-hat-wearing nerds. Today, they are office workers and homemakers, boys and girls, children, adults, and grandparents, mainstream consumers... and, of course, propeller-hat-wearing nerds. What has caused this shift, and what opportunity does it bring?

Rewinding a bit, what is "online gaming", where did it come from, and where is it going? Who are the players? What are the popular trends in game design and technologies? Are there any reigning business models? Is the Internet dead? Or, maybe, is retail as we know it the corpse just waiting to fall?

While overshadowed by the hype of the next-generation console launches and the dot-bomb, online gaming remains an area of intense interest among developers, publishers, retailers, investors, analysts, portals, independent online entertainment destinations, and massive media conglomerates.

This presentation will survey many of the major issues of the day in online gaming. It will specifically focus in on what has/has not worked in terms of the marriage of business model, market, and game design. Audience members are assumed to be somewhat conversant in games industry topics. The goal is to deepen their knowledge about the online aspects of the games industry, to identify opportunities and unanswered questions, and to throw in a few wild predictions just to keep things interesting.

John Welch, Vice President of Games and Product Development at AtomShockwave Corp., is responsible for acquiring, developing and promoting interactive games and entertainment for the Shockwave.com brand. John is driven by the purpose of raising gaming to greater mass-market appeal and commercial success, specifically via online Brand Entertainment and direct-to-consumer product sales. John joined AtomShockwave, formerly known as Shockwave.com, in the Fall of 1999, where he has been instrumental in acquiring and defining many of the games now available at www.shockwave.com. Shockwave.com acquired AtomFilms on January 15, 2001, resulting in a combined entertainment powerhouse. Prior to Shockwave.com and AtomShockwave, John was co-founder of Twofish Technology, an enterprise software product and services company. John was later hired by Sega.com as the Director of Product Development in charge of the functional design of the Dreamcast multiplayer gaming network.

John received a Bachelor of Science in Mathematics with Computer Science from the Massachusetts Institute of Technology, and a Master of Science in Computer Science from the University of Massachusetts.

October 18, 2001
The SIGGRAPH 2001 Electronic Theatre
by Silicon Valley ACM SIGGRAPH

Video presentation of cutting-edge computer animation from the 2001 SIGGRAPH conference

September 20, 2001
SIGGRAPH 2001 Technical Papers Review and Discussion
by Silicon Valley ACM SIGGRAPH

An interactive group discussion of papers presented at the SIGGRAPH 2001 conference.

June 1, 2001
A Signal-Processing Framework for Forward and Inverse Rendering
by Ravi Ramamoorthi and Pat Hanrahan

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/InvRend.html

May 2001
Adobe Atmosphere
by Adobe Systems Inc.

A presentation of the Adobe Atmosphere product. For more information about this product, see the product page here:
http://www.adobe.com/products/atmosphere/main.html

March 27, 2001
Visual Effects in Shrek
by Juan Buhler, Jonathan Gibbs, Scott Peterson, Scott Singer

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Shrek.html

February 15, 2001
The Digital Michelangelo Project
by Szymon Rusinkiewicz and Henrik Wann Jensen

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/DigMich.html

January 2001
The SIGGRAPH 2000 Electronic Theatre
by Silicon Valley ACM SIGGRAPH

Video presentation of cutting-edge computer animation from the 2000 SIGGRAPH conference

2000 Events:

November 29, 2000
The Making of Disney's Dinosaur
by Mike Belzer, Jim Hillin, Sean Phillips, and Jay Sloat

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Dinosaur.html

October 19, 2000
3D Graphics on the Web
by Abe Megahed, Leo Hourvitz, Eric Anchutz, and Peter Broadwell

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/3DWeb.html

September 21, 2000
The Story of Computer Graphics
by ACM SIGGRAPH

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Story.html

August 31, 2000
SIGGRAPH 2000 Technical Paper Review
by Silicon Valley ACM SIGGRAPH

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/S2000papers.html

July 31, 2000
SIGGRAPH 2000 International Conference in New Orleans
by ACM SIGGRAPH

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/sig2000.html

May 2000
Immersion in Virtual Worlds
by Dr. Ken Perlin

abstract forthcoming.

Here is a link to Dr. Ken Perlin's web site:
http://www.kenperlin.com

May 2000
3D Design and Animation Conference (Joint event with SF SIGGRAPH)
ILM: Effects from Mission to Mars
by SF SIGGRAPH and ILM

abstract forthcoming.

Here is a link to SF SIGGRAPH's web site:
http://www.siggraph.org/chapters/sf/

Here is a link to ILM's web site:
http://ilm.com

March 2000
Pixar: Toy Story 2
by Pixar

abstract forthcoming.

Here is a link to Pixar's web site:
http://www.pixar.com

February 2000
Introduction to the Sony Playstation 2
by Dominic Mallinson

There are some meeting notes for this event recorded in the pdf file at the address:
http://silicon-valley.siggraph.org/MeetingNotes/PS2.pdf

January 2000
Canoma - Creating 3D Models from Photographs
by Robert Seidl

abstract forthcoming.

Here is a link to the Canoma web site:
http://www.canoma.com/

1999 Events:

October 1999
Java 3D
by Henry Sowizral

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/DigMich.html

September 1999
The Making of Star Wars: Episode I

abstract forthcoming

August 1999
SIGGRAPH 1999 International Conference in LA
by ACM SIGGRAPH

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Siggraph99.html

July 1999
SGI - Integrated Visual Computing Architecture
by Zaheed Hussein

abstract forthcoming

June 1999
Non-linear Video Editing
by Randy Ubillos

abstract forthcoming

March 1999
PDI - Animation on Antz and Forces of Nature
by PDI

abstract forthcoming.

Here is a link to the PDI web site:
http://www.pdi.com/

April 27, 1999
Modeling Immersive Environments using Images with QuickTimeVR
by Ken Turkowski

There are some meeting notes for this event recorded at the address:
http://www.worldserver.com/tur....mersiveEnvir.990427.html

March 1999
Cognitive Modelling for Computer Graphics and Animation
by Dr. John Funge

abstract forthcoming

February 1999
Applying Traditional Animation Techniques to Computer Graphics
by Lorie Loeb

abstract forthcoming

January 1999
SIGGRAPH Video Review
by Silicon Valley ACM SIGGRAPH

Video presentation of cutting-edge computer animation from the 1998 SIGGRAPH conference

1998 Events:

November 1998
Modeling Using 3D Laser Scans
by Dr. V. Krishnamurthy and D. Piturro

abstract forthcoming.

Here is a link to the Paraform web site:
http://www.paraform.com/

October 1998
Rendering with Natural Light
by Dr. Paul Debevec

abstract forthcoming.

Here is a link to Dr. Paul Debevec's web site:
http://www.debevec.org/

September 1998
APIs of the Fahrenheit Initiative
by Chris Insinger

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Fahrenheit.html

August 20, 1998
The Making of Geri's Game
by Dr. Tony DeRose

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/DeRose.html

July 21-24, 1998
The International SIGGRAPH '98 Conference in Orlando
by ACM SIGGRAPH

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Sig98/index.html

April 16, 1998
Intelligent Digital Actors
by Dr. Yotta Koga

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Koga.html

March 1998
How to build a planet
by Dr. Ken Musgrave

abstract forthcoming.

Here is a link to the LA SIGGRAPH's posting about Dr. Ken Musgrave's talk:
http://la.siggraph.org/Newsletters/1997_05_13.html

February 1998
Global Illumination In and Under Trees
by Dr. Nelson Max

abstract forthcoming.

Here is a link to some information about Dr. Nelson Max:
http://www.llnl.gov/graphics/biog.html#Nelson%20Max

1997 Events:

November 20, 1997
Digital Video - Present and Future
by Robin Wilson

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/DigitalVideo.html

September 18, 1997
True 3D Displays
by Dr. Elizabeth Downing

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/3DTL.html

April 22, 1997
The Future of Flat Panel Displays
by Huang & Lewis

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/dpiX.html

March 26, 1997
Intel's MMX Technology
by Sam Wilkie

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/MMX.html

1996 Events:

November 26, 1996
The Future of Internet Games
by Yu Shen Ng

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/mpath.html

October 1996
3D Technology at Apple Computer
by Gavin Miller & Pablo Fernicola

abstract forthcoming.

1995 Events:

November 28, 1995
Pixar and Disney's Toy Story
by Rick Sayre, Ronen Barzel, Rich Quade, and Hal Hickel

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/ToyStory.html

September 26, 1995
VRML
by Frerichs and Graham

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/VRML.html

June 1995
Painting in 3D
by Andrew Beer and Maneesh Agrawala

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Paint3D.html

April 1995
The Virtual Brewery and Barcode Hotel
by Perry Hoberman

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/BreweryBarcode.html

March 28, 1995
New Graphics Technologies from Apple Computer
by Frank Cassanova, Eric Chen, Dan Venolia, Pablo Fernicola and Fabio Péttinati

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/QD3DQTVR.html

January 24, 1995
Interactive TV
by Alan Trerise

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/InteractiveTV.html

1994 Events:

November 22, 1994
Computer Aided Cornea Modeling
by Prof. Brian Barsky

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Cornea.html

October 25, 1994
Computer Graphics in Tomorrow's Video Games
by David Walker

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Games.html

September 27, 1994
The Inception of Computer Graphics at the U. of Utah
by John Warnock, Ed Catmull, Frank Crow, and Lance Williams

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Utah.html

June 28, 1994
Innovative Volume Rendering using 3D Texture Mapping
by Sheng-Yih Guan & Richard Lipes of Kubota

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/VolumeRendering.html

April 26, 1994
NASA Ames Research Center Tour
Video tape presented by Sam Uselton

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/NASTour.html

April 26, 1994
Live Graphics: The SailTrack System
by Tim Heidmann

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/SailTrack.html

March 22, 1994
How to get a job in Computer Graphics
by Sandra Scott, Paul Treverson, Sandra Schmidt

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/GetAJob.html

February 22, 1994
The Illustrated Internet
by Alex Deacon

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/Internet.html

January 25, 1994
Jurassic Park - The Illusion of Life
by Steve Williams and Joe Letteri

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/ILM.html

1993 Events:

May 25, 1993
Broderbund's Living Books
by Mark Schlichting

There are some meeting notes for this event recorded at the address:
http://silicon-valley.siggraph.org/MeetingNotes/LivingBooks.html




Copyright © 1993-2009 Silicon Valley ACM SIGGRAPH