The electromagnetic spectrum is a vast expanse of varied energies that have been with us since the origin of the universe. Despite this great range of energies, our human senses can detect only a very limited portion, which we call visible light. Only in recent history have we created technologies that enable us to harness light and other energies. Even though we cannot see these other energies with the naked eye, we have employed them in our communications. Radio is one such technology, and the technological lens by which we “see” radio is a radio receiver. The RF project investigates creating pictures of the overall radio activity in a particular physical locale. Using an N6841A wideband RF sensor donated by Keysight Technologies, Brett Balogh, Anıl Çamcı, Paul Murray, and Angus Forbes devised a series of projects that explore our “lived electromagnetism”, including an interactive sonification of electromagnetic activity, real-time information visualization applications, and a VR experience in which common RF signals are identified in various social scenarios. These projects were shown on the opening night of VISAP’15 and presented the following day in an artist talk by Brett Balogh.
The Chicago 0,0 Riverwalk AR experience provides a novel way for users to explore historical photographs sourced from museum archives. As users walk along the Chicago River, they are instructed to use their smartphone or tablet to view these photographs alongside the current views of the city. By matching location and view orientation, the Riverwalk application creates an illusion of “then” and “now” coexisting. This superimposition of the historical photographer’s view and the user’s view is the basis of educational engagement for the user and a key factor in curating the images and the narrative surrounding them, facilitating a meaningful museum experience in a public, outdoor context. The first episode of the Riverwalk AR experience focuses on a single block between N. LaSalle and Clark Streets, the site of the Eastland Disaster in 1915. The site was selected because of the importance of this historical event (the sinking of the Eastland cruise ship 100 years ago was the largest single loss of life in Chicago’s history) and because of the abundant media available in the archive, including extensive photographic documentation, newspapers, and film reels. The Riverwalk project is led by Geoffrey Alan Rhodes in collaboration with the Chicago History Museum; Marco Cavallo developed new technology to facilitate the creation of public outdoor AR experiences.
Anıl Çamcı’s Distractions brings invisible and inaudible signals into the kinetic domain. By picking up the electromagnetic waves in the exhibition space, it visualizes the signals communicating with the mobile devices brought into the space by visitors. Such signals, which would otherwise go unnoticed by human perception, represent some of the most prevalent sources of distraction in our everyday lives. The work comments not only on the artist’s process, which is inherently plagued by such distractions, but also on the relationship between modern audiences and exhibition spaces. Relying exclusively on digital computing techniques, such as depth imaging, signal processing, audio synthesis, and numeric milling, Distractions visualizes data, without using computer displays, through infrasound vibrations that activate a point cloud of the artist’s head. This work was first presented at the Art.CHI Inter/Action exhibition as part of ACM CHI in May 2016.
Node Kara is an audiovisual mixed reality installation created by Anıl Çamcı. It offers body-based interaction using 3D imaging of the exhibition space. Node and kara are two words in Japanese that indicate causality. While node is used to describe natural cause-and-effect relationships, kara is used when a causality is interpreted subjectively. Although a clear causal relationship between actions and reactions is a staple of interaction design, the user’s subjective experience of such relationships often trumps the designer’s predictions. By adopting blurring both as a theme and as a technique, the piece obfuscates the causal link between an interactive artwork and its audience. The deblurring of the audiovisual scene becomes an attracting force that invites the viewer to unravel the underlying clarity of Node Kara. This comes, however, at the cost of losing a broader perspective of the work, as the viewer needs to come closer to both see and hear the work in greater detail. This work was first presented at the IEEE VR Workshop on Mixed Reality Art in March 2016.
Works by media artists tend to be evaluated in terms of either cultural or pragmatic utility. The function of the media arts is often described by highlighting the societal contribution of creating products of cultural enrichment, introducing tools for promoting innovation, or providing the means by which to think critically about the ethical ramifications of technology. The media arts are also seen as having the potential to aid in solving specific scientific and engineering problems, especially those having to do with creative ways of representing, interacting with, and reasoning about data. Many media artists also characterize their own work as critical reflection on technology, embracing technology while questioning the implications of its use. Articulating these multifaceted tensions between artistic outlooks and technical engagement in interdisciplinary art-science projects can be complex: what is the role of the artist in research collaborations? Many artists have wrestled with this question, but there is no clear methodological approach to conducting media arts research in these contexts. Articles presented at VISAP’14, ArtsIT’14, and SIGGRAPH’15, and recently accepted to Leonardo, investigate the role of the media artist in art-science contexts; articles written in collaboration with George Legrady explore a range of issues related to presenting data visualization in public arts venues.
DigitalQuest facilitates the creation of mixed reality applications, providing application designers with the ability to integrate custom virtual content within the real world. It supports the creation of futuristic “scavenger hunts”, where multiple users search for virtual objects positioned in the real world and where each object is related to a riddle or a challenge to be solved. Each player competes with the other participants by finding virtual objects and solving puzzles, thereby unlocking additional challenges. Virtual objects are represented by animated 3D meshes locked to a predetermined position in the real world. An object is activated when a player gets within a proximity threshold and then taps the object on the screen of his or her mobile phone. In our demonstration application, configurable virtual content appears, followed by a question that must be answered in order to pass to the next challenge. The displayed content may consist of images, video and audio streams, graphical effects, or a text message that provides hints on how to advance in the game. The editor makes it easy to create puzzles that can be solved by exploring the surroundings of the virtual object in order to discover clues and by making use of location-specific knowledge. When a participant figures out the correct solution, he or she scores points related to the complexity of the challenge and also unlocks remaining puzzles that cause new virtual objects to appear in the world. At the end of the event, the player with the most points wins. DigitalQuest was first presented at the IEEE VR Workshop on Mixed Reality Art in March 2016.
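As a rough illustration of the proximity-threshold activation described above, the sketch below checks whether a player is close enough to a geolocated virtual object to tap it. The function names, the 15-meter threshold, and the coordinates are hypothetical assumptions for illustration, not details of the DigitalQuest implementation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def can_activate(player_pos, object_pos, threshold_m=15.0):
    """An object becomes tappable once the player is inside the threshold."""
    return haversine_m(*player_pos, *object_pos) <= threshold_m

# Example with hypothetical coordinates near a city block.
player = (41.8885, -87.6320)
artifact = (41.8886, -87.6322)
print(can_activate(player, artifact))  # True if within ~15 m
```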
Monad is a networked multimedia instrument for electronic music performance. Users interact with virtual objects in a 3D graphical environment to control sound synthesis. The visual objects in Monad expand upon the idea of optical discs with the addition of interactivity, real-time synthesis parameters, and three-dimensional motions. Every virtual object in a Monad performance is accessible by all participants rather than being assigned to individual performers. The objects therefore act less like personal music instruments and more like shared components of a musical collaboration. Monad was developed by Cem Çakmak with Anıl Çamcı and first presented at the CHI 2016 Workshop on Music and HCI. An expanded version of the paper, focused on the use of virtual environments as collaborative music spaces, was published in the proceedings of NIME’16.
PORTAL relies on an audiovisual translation rather than a cross-modal mapping of performance data. This translational approach creates a poietic turbulence between the sounds and the images that make up a PORTAL performance. The audiovisual translation relies on a complex iterative feedback loop, in which both the artist and the audience evaluate the momentary aesthetic hierarchies between the sounds and the images. The artist reacts to these fluctuations in hierarchy and balances the relationship between sound and visuals accordingly. Each performance begins with the introduction of primitive elements (e.g., pure tones in the audio domain and basic geometries in the visual domain), which gradually evolve into more complex structures. Dynamic visual entities are created by the oscillographic translation of audio signals that are passed through a series of signal processors. PORTAL was developed by Gökem Özdemir with Anıl Çamcı and was presented at NIME’16.
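One common form of oscillographic translation renders an audio buffer in oscilloscope “XY mode”, with one channel driving each axis. The minimal sketch below illustrates that general technique only; it is an assumption for illustration, not PORTAL’s actual signal chain, which first passes the audio through a series of signal processors.

```python
import numpy as np

def oscillographic_points(left, right, gain=1.0):
    """Map a stereo audio buffer to 2D points, oscilloscope XY-mode style:
    the left channel drives x and the right channel drives y."""
    return np.stack([gain * left, gain * right], axis=1)

# Example: a pure tone against a phase-shifted copy traces a Lissajous ellipse.
sr = 48_000
t = np.arange(sr // 10) / sr            # 100 ms of samples
left = np.sin(2 * np.pi * 220 * t)      # 220 Hz sine
right = np.sin(2 * np.pi * 220 * t + np.pi / 4)
points = oscillographic_points(left, right)
print(points.shape)                      # (4800, 2): one x/y pair per sample
```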
Imaginando Macondo is a public artwork that commemorates Nobel Prize-winning author Gabriel García Márquez.
It was first showcased at the Bogota International Book Fair in April 2015 to an audience of more than 300,000 over the course of two weeks. The project, developed by George Legrady, Andres Burbano, and Angus Forbes, involved extensive collaboration between an international team of artists, designers, and programmers, including Paul Murray and Lorenzo Di Tucci of UIC. Viewers participate by submitting and classifying a photograph of their choice via a kiosk or their mobile phone. The classification is based on literary themes that occur in García Márquez’ work; and user-submitted photos appear alongside images produced by well-known Colombian photographers. An article describing the project was published in IEEE Computer Graphics & Applications in 2015.
Video granular synthesis is an experimental method for the creative reshaping of one or more video signals based on granular synthesis techniques, normally applied only to audio signals.
A wide range of creative effects are made possible through conceptualizing a video signal as being composed of a large number of “video grains.” These grains can be manipulated and maneuvered in a variety of ways, and a new video signal can then be created through a resynthesis of these altered grains. Video granular synthesis was first used in a composition by Christopher Jette, Kelland Thomas, Angus Forbes, and Javier Villegas, titled v→t→d. A description of this project was presented at the International Computer Music Conference in Athens, Greece (2014), and the piece was performed at Exploded View Microcinema (2014), as a University of Arizona Confluencenter event (2014), and again at ICMC in Denton, Texas (2015). A write-up of the approach was published in Computational Aesthetics in 2015.
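As a rough illustration, a toy version of the pipeline might slice a frame sequence into overlapping temporal grains and reassemble them in a new order. The grain definition and manipulations below are simplified assumptions for illustration; the published technique admits richer grain definitions and transformations.

```python
import numpy as np

def make_grains(frames, grain_len=8, hop=4):
    """Slice a video (frames: [T, H, W, C]) into overlapping grains of
    grain_len frames, one grain every hop frames."""
    return [frames[i:i + grain_len]
            for i in range(0, len(frames) - grain_len + 1, hop)]

def resynthesize(grains, order):
    """Reassemble a new video by concatenating grains in a new order;
    reversing or repeating grains yields further effects."""
    return np.concatenate([grains[i] for i in order], axis=0)

# Example with random noise standing in for decoded video frames.
rng = np.random.default_rng(0)
video = rng.integers(0, 255, size=(120, 64, 64, 3), dtype=np.uint8)
grains = make_grains(video)
shuffled = resynthesize(grains, rng.permutation(len(grains)))
print(shuffled.shape)
```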
Turbulent World is a time-based artwork that displays an animated atlas that changes in response to projected deviations in world temperature over the next century. The changes are represented by visual eddies, vortices, and quakes that distort the original map. Additionally, the projected temperatures themselves are shown across the world, increasing or decreasing in size to indicate the severity of the change. The data used in the artwork was generated by a sophisticated climate model that predicts the monthly variation in surface air temperature across different regions of the world through the end of the century. A write-up of the project was presented at ISEA’15 in Vancouver, British Columbia.
Poetry Chains is a series of animated text visualizations of the poetry of Emily Dickinson, first showcased in the Hybridity and Synesthesia exhibition at Lydgalleriet, as part of the Electronic Literature Organization Festival in Bergen, Norway in 2015.
The project is inspired by Lisa Samuels’ and Jerome McGann’s reading of a seemingly whimsical fragment found in a letter written by Dickinson: “Did you ever read one of her Poems backward, because the plunge from the front overturned you?” They investigate what it might mean to interpret this question literally, asking how a reader could “release or expose the poem’s possibilities of meaning” in order to explore the ways in which language is “an interactive medium.” Poetry Chains provides a continuous, dynamic remapping of Dickinson’s poems by treating her entire corpus as a single poem. A depth-first search is used to create collocation pathways between two words within the corpus, performing a non-linear “hopscotch” (with a poetic rather than narrative destabilization). A version of the animations (with no interaction) is available online, developed by Angus Forbes and Paul Murray.
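The collocation pathways might be computed along these lines. The tokenization and the exact collocation criterion used in Poetry Chains are not specified above, so both are assumptions in this sketch, which treats simple word adjacency as a collocation.

```python
from collections import defaultdict

def collocation_graph(tokens):
    """Link each word to the words that appear immediately after it."""
    graph = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        graph[a].add(b)
    return graph

def dfs_path(graph, start, goal):
    """Depth-first search for one collocation pathway from start to goal."""
    stack, seen = [(start, [start])], {start}
    while stack:
        word, path = stack.pop()
        if word == goal:
            return path
        for nxt in graph[word]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None

tokens = "hope is the thing with feathers that perches in the soul".split()
print(dfs_path(collocation_graph(tokens), "hope", "soul"))
```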
The Fluid Automata system comprises an interactive fluid simulation and a vector visualization technique that can be incorporated into media arts projects. These techniques have been adapted for various configurations, including mobile applications, interactive 2D and 3D projections, and multi-touch tables, and have been presented in a number of different environments, including galleries, conferences, and a virtual reality research lab: Science City at the Tucson Festival of Books (2013); the Center for NanoScience Institute in Santa Barbara (2012); the IEEE VisWeek Art Show in Providence, Rhode Island, curated by Bruce Campbell and Daniel Keefe (2011); and Questionable Utility at the University of California, Santa Barbara, organized by Xárene Eskandar (2011). The technical details of the Fluid Automata system are described in a paper presented at Computational Aesthetics in 2013; an expanded version of the paper, including a discussion of the history of artworks making use of cellular automata concepts, was published as a chapter in the 2014 Springer volume Cellular Automata in Image Processing and Geometry, edited by Paul Rosin, Adam Adamatzky, and Xianfang Sun.
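For flavor, here is a toy grid automaton in the same spirit of local neighborhood updates. This is not the Fluid Automata rule set (which couples touch input, vector fields, and rendering); every name and parameter below is an illustrative assumption.

```python
import numpy as np

def ca_step(density, rate=0.2):
    """One cellular-automaton update: each cell exchanges a fraction of its
    value with its four neighbors (a toy diffusion rule on a torus)."""
    up = np.roll(density, -1, axis=0)
    down = np.roll(density, 1, axis=0)
    left = np.roll(density, -1, axis=1)
    right = np.roll(density, 1, axis=1)
    return density + rate * (up + down + left + right - 4 * density)

grid = np.zeros((64, 64))
grid[32, 32] = 1.0           # an initial "ink drop"
for _ in range(100):
    grid = ca_step(grid)
print(grid.sum())            # total mass is conserved by the exchange rule
```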
The interactive multimedia composition Annular Genealogy, created in collaboration with Kiyomitsu Odai, explores the use of orchestrated feedback as an organizational theme. The composition is performed by two players, each of whom uses a separate digital interface to create and interact with the parallel iterative processing of compositional data in both the aural and visual domains. In the aural domain, music is generated using a stochastic process that sequences tones mapped to a psycho-acoustically linear Bark scale. The timbre of these tones and the parameters determining their sequencing are derived from various inputs, most notably the 16-channel output of the previous pass fed back into the system via a set of microphones. In the visual domain, animated, real-time graphics are generated using custom software to create an iterative visual feedback loop. The composition brings various layers of feedback into a cohesive compositional experience. These feedback layers are interconnected, but can be broadly categorized as physical feedback, internal or digital feedback, interconnected or networked feedback, and performative feedback. An article describing our approach was presented at
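The Bark scale mentioned above has a standard closed-form approximation (Zwicker & Terhardt), which can be inverted numerically to place tones at perceptually even steps. The band edges, step count, and bisection inversion below are illustrative assumptions, not details of how Annular Genealogy quantizes its tones.

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt approximation of the Bark critical-band scale."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def bark_to_hz(z, lo=20.0, hi=20000.0, iters=60):
    """Numerically invert hz_to_bark by bisection (the mapping is monotonic)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if hz_to_bark(mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Tones spaced linearly in Bark take perceptually even steps.
steps = 16
z_lo, z_hi = hz_to_bark(50.0), hz_to_bark(16000.0)
frequencies = [bark_to_hz(z_lo + (z_hi - z_lo) * i / (steps - 1))
               for i in range(steps)]
print([round(f, 1) for f in frequencies])
```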
Infrequent Crimes is a data visualization piece that iterates through a list of uncommon crimes that occurred in San Francisco within the last year.
The squares accompanying each type of crime each represent a single incident and display its location. The longitude and latitude of each incident, gathered from San Francisco police reports, are indicated either by a map tile or by an image taken from Google Maps Street View.
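For illustration, fetching an image for one incident might look like the following, using Google’s Street View Static API. The parameters shown are assumptions, and the original piece may have used a different interface than today’s API.

```python
# Hypothetical helper: build a Street View image URL for one incident's
# coordinates. The size, key, and function name are illustrative only.
from urllib.parse import urlencode

def street_view_url(lat, lng, size="640x480", key="YOUR_API_KEY"):
    params = urlencode({"size": size, "location": f"{lat},{lng}", "key": key})
    return f"https://maps.googleapis.com/maps/api/streetview?{params}"

print(street_view_url(37.7749, -122.4194))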
Infrequent Crimes was part of the Super Santa Barbara 2011 exhibition at Contemporary Arts Forum in Santa Barbara, California, curated by Warren Schultheis. It was also featured in Spread: California Conceptualism, Then and Now at SOMArts in San Francisco, CA, curated by OFF Space. Excerpts can be seen here.
Coming or Going is a fixed video piece created with custom software in which drums are used to trigger the creation of procedurally generated geometric abstractions and the application of various visual effects. This project was most recently shown as part of Idea Chain, the Expressive Arts Exhibition, at Koç University Incubation Center in Istanbul, Turkey (2015); and was featured in AVANT-AZ at Exploded View Microcinema in Tucson, Arizona (2014), curated by David Sherman and Rebecca Barten. An early version of the software was used in a live performance with live coder Charlie Roberts at Something for Everyone, the Media Arts and Technology End-of-the-Year festival at University of California, Santa Barbara (2009).
The New Dunites is a site-specific media art project comprising research, an augmented reality application, and an interactive multimedia installation. The project investigates a culturally unique and biologically diverse geographic site, the Guadalupe-Nipomo Coastal Dunes. Buried under these dunes are the ruins of the set of DeMille’s 1923 epic film, The Ten Commandments. The project employed Ground Penetrating Radar (GPR) technology to gather data on this artifact of film history. In an attempt to articulate and mediate the interaction between humans and this special environment, the New Dunites project, led by Andres Burbano, Solen Kiratli DiCicco, and Danny Bazo in collaboration with Angus Forbes and Andrés Barragán, constructed an ecology of interfaces (from mobile device apps to gallery installations) that made use of this data as their primary input. The artistic outputs include interactive data visualization, a physical data sculpture, a novel temporal isosurface reconstruction of the original film set, and video documentation describing the data collection process and introducing the project as a work of media archaeology. The project was selected for an “Incentivo Produccion” award by Vida 13.0 and has been presented at the Todaiji Culture Center in Nara, Japan. A write-up of the project was published at ACM MM’12.
Cell Tango is a dynamically evolving collection of cellphone photographs contributed by the general public. The images and accompanying descriptive categories are projected at large scale in the gallery and dynamically change as the image database grows over the course of the installation. The project is a collaboration with artist George Legrady and (in later iterations) composer Christopher Jette. Cell Tango was featured at the Inauguration of the National Theatre Poitiers, organized by Hubertus von Amelunxen, Poitiers, France (2008); as a featured installation at Ford Gallery, Eastern Michigan University, Ypsilanti (2008-2009), curated by Sarah Smarch; as part of “Scalable Relations,” curated by Christiane Paul, Beall Center for Art & Technology, UC Irvine (2009); and as a featured installation at the Davis Museum and Cultural Center, Wellesley College, Wellesley (2009), curated by Jim Olson. Sonification was added and premiered at the Lawrence Hall of Science, UC Berkeley (2010), and featured at the Poznan Biennale, Poland (2010). More information about the project can be found here.
Data Flow consists of three dynamically generated data visualizations that map members’ interactions with the Corporate Executive Board’s web portal. The three visualizations are situated on the “Feature Wall” spanning the 22nd to 24th floors of the Corporate Executive Board Corporation, Arlington, Virginia. Each of the three visualizations spans three horizontally linked screens, displaying animations at 4080 x 768 pixel resolution. The flow of information is as follows: CEB IT produces appropriately formatted data, which is retrieved every ten minutes by the Data Flow project server and stored in a local database, where it is kept for 24 hours. The project server also retrieves longitude and latitude for location data and discards any data that does not meet the requirements of the visualizations. The stored data is then forwarded to three visualization computers that each process the received data according to their individual animation requirements. Data Flow was developed in collaboration with George Legrady in 2009, commissioned by Gensler Design. More information about the project can be found here and here.
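A minimal sketch of the polling behavior described above follows, assuming a SQLite store and stand-in fetch/geocode/forward callables; none of these names come from the actual Data Flow server, and only the ten-minute cadence and 24-hour retention are taken from the description.

```python
import sqlite3
import time

POLL_SECONDS = 600           # data is retrieved every ten minutes
RETENTION_SECONDS = 86_400   # records are kept for 24 hours

def poll_once(conn, fetch, geocode, forward):
    """One pass of a hypothetical project-server loop: fetch formatted
    records, geocode them, discard unusable rows, store, expire, forward."""
    now = time.time()
    for record in fetch():                  # data produced by CEB IT
        coords = geocode(record)            # longitude/latitude lookup
        if coords is None:
            continue                        # discard non-conforming data
        conn.execute("INSERT INTO events VALUES (?, ?, ?)", (now, *coords))
    conn.execute("DELETE FROM events WHERE ts < ?", (now - RETENTION_SECONDS,))
    conn.commit()
    forward(conn.execute("SELECT * FROM events").fetchall())

# Toy usage with stand-in callables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts REAL, lon REAL, lat REAL)")
poll_once(conn,
          fetch=lambda: [{"office": "Arlington"}],
          geocode=lambda r: (-77.09, 38.89),
          forward=print)
```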