Emerging...

Immersive Methods

On Google Maps, the town of Fort Collins, Colorado is 1,048.1 miles from Uncertain, Texas—a physical distance. While this account of the physical mileage is utterly reliable, it also complicates my effort to relay a story about a place experienced in my memory and in the reanimations of words and photos, but truly understood only in the place itself. The more I think about the project, the more I see that the technological representations, like the physical distance, are in their seeming stability interrupted by my memories and understanding, so that they are ruptures more than record.

In tangible, more reliable terms, the making of this story used technologies from kayaks to VR cameras and called for the navigation of bayous and MySQL databases. This story and the research informing it enact an immersive methodology for exploring the expanded notions of textuality afforded by AR storytelling—and they invite readers to explore, below, the virtual reality video that conveys (some) of the experience of Caddo.

Inquiry

I started my study to satisfy my curiosity about what the augmented reality (AR) medium is. During the study, this initial question transformed from “what is it?” to wider, more philosophical considerations of what it means to write on the world. I first approached this broad inquiry into “what AR is as a rhetorical space” by attempting a comparative interface critique. Using free mobile apps, the most accessible form of AR at the time of this project, I wanted to understand the genre conventions of these apps and their content. I tired quickly of this approach. I wanted instead to create a story myself to work through the actual act of making, writing, and learning about place and AR through this medium.

When we recognize that some kinds of digital stories are geographically bound—that is, rooted in particular places—we can see, as I did, that creating such a storyscape means that my readers/users can only experience it in the place. This complicated my philosophical inquiry and caused me to examine the affordances and limitations of writing storyscapes in AR.

As consumer applications of AR are still emerging media at the time of this writing, my inquiry into this technology incurs the risk (nay, the certainty) that the specific applications I reference will become dated or be replaced by similar, improved applications, or simply by applications that I have yet to learn about.

While the mobile applications of AR are relatively new media, augmented reality as a concept and as an engineering tool has had numerous applications since Thomas Caudell of Boeing coined the term almost twenty-four years ago in the Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, alongside other emerging conversations about hypertext and legal documents, “relational tone” and computer interface design, and the indexing needs of teams of workers accessing files remotely (Mital and Gedeon, 1992; Walther, 1992; Wanninger, 1992). Caudell’s definition anticipates AR as a productivity tool:

“The enabling technology for this access interface is a heads-up (see-thru) display head set (we call it the “HUDset”), combined with head position sensing and workplace registration systems. This technology is used to “augment” the visual field of the user with information necessary in the performance of the current task, and therefore we refer to the technology as “augmented reality” (AR)” (Caudell and Mizell, 1992, p. 660).

Caudell’s conception of AR involves accessing it through a head-mounted display (HMD): either one like Ivan Sutherland’s “Sword of Damocles,” often cited as the first HMD and created by Sutherland’s team at the University of Utah in 1968, or more recent AR HMDs like Microsoft’s HoloLens or Google’s now-defunct Glass project.

While these “see-thru” displays are making appearances as productivity tools for navigation and CAD modeling among many other applications ("Wikitude Drive", 2010; SpaceX, 2013), AR has also evolved as a tool for artistry and activism.

In a special issue of Convergence: The International Journal of Research into New Media Technologies, media scholar and artist Ozge Samanci draws comparisons between the “site-specific” art of the 1980s and 1990s and “mixed reality” location-based exhibits that call upon the immediate context of the installation to inform their meaning; in both cases—the site-specific art of the twentieth century and the mixed reality, geolocation-based art that AR allows for—“out of context it will still generate meaning, although the meanings may be completely contrary or irrelevant to the original” (Samanci, 2014, p. 15).

In the same issue, Dutch artist Sander Veenhof explores the implications of AR content through an “uninvited exhibition of AR art within the walls of the iconic Museum of Modern Art in 2010” (2014, p. 11). Various artists submitted artwork coordinated by a larger organization, Manifest.AR, which then geo-tagged the artwork within the MoMA and developed a Layar application for visitors’ mobile phones; the artists collectively created a mixed reality exhibition wholly unsponsored by the museum. Such applications of AR suggest the activist bent of the medium in many of its uses.

Other developers have used AR for activism in places where physical protest is dangerous or cannot be sustained continuously in the place. In Spain, the Citizen Platform against the Citizens’ Security Law and Penal Code Reform organized “Holograms for Freedom” as a virtual protest against Spain’s gag law which, according to the organization, “criminalizes the right to protest” and is a direct “attack on the right of freedom of assembly” (“Project”, hologramasporlalibertad.org). The group organized a remote protest of over 17,000 participants and then projected the holograms outside the Spanish Parliament building in April 2015 (No Somos Delito).

Digital media and rhetoric scholar John Tinnell argues in Techno-Geographic Interfaces that “the relationship between text and context becomes a critical rhetorical issue” in that “particular texts always shape a text’s meaning” but “AR textuality is shaped by the context at the level of form; the textual field is always permeable and transparent” (Tinnell, 2014, p. 80). Thus the blending of interface and physical world creates new ways of writing in and understanding those worlds.

As I sat writing this section of my reflection in the coffee shop in my building, I looked up from my computer when I overheard the barista and another man at the bar discussing virtual and augmented reality. We started up a conversation about the Spanish protest I had just outlined above, and we discussed the uptake of AR mobile applications. They asked if I played Ingress, one of the most downloaded AR games. They informed me that Fort Collins has several “portals” created by users of the game.
The man at the bar brought up the map of the city on his laptop as I turned on the application on my phone. I’d downloaded Ingress months earlier when my project started but I’d never played. He pointed out the window to a painted electrical box across the street from the coffee shop and told me there’s a portal in it; users playing the game create these portals and they become ways to “gain territory” in the game. The box across the street is part of a city initiative to beautify the electrical boxes by hiring local artists to paint the exterior of the box.
They showed me another AR game, Father.io, which recently raised over $300,000 from five thousand backers in an Indiegogo crowdfunding campaign. Father.io is a massive laser-tag game played in teams in urban space, navigated via the app on players’ mobile phones.
I moved back to my chair in the corner of the coffee shop and stared out the window at the electrical box with new amusement. I opened the game on my phone and the Ingress game displayed a message:

“exotic matter of unknown origin is seeping into our world.”

Researching AR Applications

For writing in and understanding AR as a medium, I first set out to find the application best suited for my purposes as an author/developer.

Wanting to explore the medium in its mobile form, I opened my phone’s application store and searched for augmented reality apps for Android. To narrow the field of available AR mobile applications, I limited myself to free applications with more than five hundred thousand downloads, reasoning that their developers likely had the most data for troubleshooting initial glitches in the apps.

I also limited my search to AR apps that paired geo-location data with textual overlay. While many AR applications allow for overlaying images or animations over the phone’s camera interface, I chose to focus on text rather than image integration. Owing to the constraints of the field of rhetoric and composition as I understand it now, I thought focusing on alphabetic text would provide me with a familiar writing process. Future researchers may explode my notions of textuality while still undergoing AR storyscape processes that mirror my interpretive process here.

Three applications emerged as useful for viewing place in AR: Wikitude, Junaio, and Layar. Each of these applications uses the phone’s camera to create the “see-thru” interface that Caudell envisioned. Among the three, Layar seemed to have the most robust and usable developer documentation, and it offers an add-on called “GeoLayar” that displays geo-tagged content.

About Layar

As I discovered, naming and classifying uses and terms in emerging media proves difficult. (Live wells, storyscapes, etc.) It seems Layar, the company, encountered similar problems. As I’ve discussed the application with various audiences, I’ve encountered a persistent problem of distinguishing between what I created and what is part of the Layar platform, which introduces other confusion about how users/readers access my story.

To clarify, I will attempt to provide an incomplete but useful metaphor. Layar is an AR browser for mobile phones. What this means, I learned, is that much like a browser that accesses websites, Layar is a browser that accesses AR-enabled content. This can look like geolocation data or “interactive print” data—images that invite viewers to interact with print media on their mobile phones.

“Layar” as an application/platform name causes some confusion as well. The platform calls itself “Layar,” yet it also invites developers to create “layars,” such as the geolayar that I created for my storyscape. Geolayars differ from interactive print “layars”; they offer a mobile-guided view of geo-tagged text in a particular location. For instance, a developer/author might create a geolayar that shows discrete historical markers in a town. Other geolayars aggregate data from other mobile apps, like “TweepsAround,” a geolayar that shows tweets in any given area. Users point their phones in different directions and the screen shows floating images or tweets that Twitter users have recently geo-tagged around them. The images float in front of the camera image of the physical world around their phones.

The extent to which users and writers are critically aware that third-party applications access their posts through publicly accessible geolocation data, and the privacy implications of who can see that text, remain outside the scope of this research but suggest a rich site for future work on the intersections of AR and privacy.

Screen shots of Immersion live wells
Developing my Geolayar/Storyscape

As I explained briefly above, throughout this process of developing using Layar, I came to see Layar, Wikitude, and Junaio less as “apps” and more as AR-enabled browsers. Much in the way that web browsers like Firefox, Chrome, or Internet Explorer all access web servers and display HTML-based content, AR browsers like Layar access web servers and display AR-enabled content. In theory, the storyscape I’ve written in Layar should also be viewable via the other AR browsers I have mentioned, such as Junaio or Wikitude. Together they constitute a new genre of browsing capabilities. This browser metaphor is technologically incomplete, but I invoke Borges’s “most perfect map”—the map that is, inch for inch, the same as the territory—to suggest that all metaphors, maps, and geolayars are likewise incomplete.

With Layar selected as my preferred development platform, I began developing the GeoLayar that would become the medium of my storyscape. Through hosting a MySQL database on my server (MySQL is a relational database management server that stores and delivers content from networked computers), I connected the data of my story with the API (Application Program Interface) at Layar to make my story accessible to their browser. Any user with the Layar browser on their mobile phone can access my narrative either by scanning the QR code on the story’s website or by using the search feature in Layar to search for “Immersion”. (Much like any user can access a website by putting the URL into a browser, my story has its own unique identifier that directs the Layar browser to my server and to the actual content of the story.)
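The plumbing works roughly like a web request: the Layar browser queries the developer's server, and the server answers with a JSON list of points drawn from the database. The sketch below is a hypothetical, minimal version of such an endpoint, not my actual code: it uses an in-memory SQLite table as a stand-in for the MySQL database, the table and column names are my own invention, the sample coordinates are approximate, and the JSON field names only approximate the shape of Layar's GetPOIs response.

```python
import json
import sqlite3

# Stand-in for the MySQL table of live wells; schema and names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE poi (
    id INTEGER PRIMARY KEY,
    title TEXT, description TEXT, footnote TEXT,
    lat REAL, lon REAL)""")
conn.execute(
    "INSERT INTO poi VALUES (?, ?, ?, ?, ?, ?)",
    (1, "Mittie Stephens", "Site of the 1869 steamboat fire.",
     "A live well.", 32.710, -94.103))  # approximate Caddo Lake coordinates

def get_pois(layer_name):
    """Assemble a Layar-style JSON response from the database.

    The envelope (layer, hotspots, errorCode, errorString) approximates
    Layar's GetPOIs response shape; everything else is illustrative.
    """
    hotspots = [
        {
            "id": str(pid),
            "anchor": {"geolocation": {"lat": lat, "lon": lon}},
            "text": {"title": t, "description": d, "footnote": f},
        }
        for pid, t, d, f, lat, lon in conn.execute(
            "SELECT id, title, description, footnote, lat, lon FROM poi")
    ]
    return json.dumps({"layer": layer_name, "hotspots": hotspots,
                       "errorCode": 0, "errorString": "ok"})
```

In production this response would be served over HTTP from the developer's server, which is what lets any copy of the Layar browser, anywhere, resolve a layar's name to its content.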

Variables in phpMyAdmin
Writing the story

The API (or software protocol) that Layar has created meant adhering to strict parameters for the amount of content I could fit into the “live wells” of the narrative. Each text entry is limited to a 60-character “title” (including spaces); a 140-character “description”; and a 90-character “footnote.” To maximize the use of characters, I treated each of these sections as verse-like spaces for various parts of the live well’s text. The title draws attention to the context or subject of the live well; the description expands on the theme; and the footnote draws the story outward again, almost as a dramatic aside.
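The limits above are easy to check mechanically. The helper below is my own illustrative sketch, not part of the Layar platform; the sample live well text is invented for the example.

```python
# Per-field character caps as described above (counting spaces).
LIMITS = {"title": 60, "description": 140, "footnote": 90}

def over_limit(live_well):
    """Return the names of any fields that exceed their character cap."""
    return [field for field, cap in LIMITS.items()
            if len(live_well.get(field, "")) > cap]

# Invented example: the footnote is deliberately over the 90-character cap.
well = {
    "title": "Cypress at dawn",
    "description": "Fog over the bald cypress, the lake a mirror.",
    "footnote": "x" * 120,
}
```

Here `over_limit(well)` returns `["footnote"]`, flagging the field that needs trimming before the entry will fit the API's constraints.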

I input each live well into a phpMyAdmin database table and set about the task of finding the latitude and longitude to assign to each entry.

Screen shots of Immersion live wells
Gathering the data points

For the live wells to show up in the storyscape in a particular location, I had to gather the coordinates of each live well. Layar calls these nodes “POIs” or “points of interest”. Each live well POI places the live well in the correct spot using the latitude and longitude specified in the table.
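When the browser requests POIs, it reports the user's position and a search radius, and only points within that radius appear on screen. A standard way to reason about which live wells fall inside that radius is the haversine great-circle distance; the sketch below is that textbook formula, not code from my geolayar, and the coordinates in the usage note are approximate.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    R = 6371000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def within_radius(user, poi, radius_m):
    """True if a POI falls inside the browser's search radius."""
    (lat1, lon1), (lat2, lon2) = user, poi
    return haversine_m(lat1, lon1, lat2, lon2) <= radius_m
```

For instance, a kayaker at roughly (32.710, -94.103) would see a live well pinned a few hundred meters away, but not one across the lake, if the search radius were set to 500 meters.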

Armed with my water-resistant notebook and Sharpie and a waterproof box for my mobile phone, I set out on the lake in my kayak. At each spot—some pre-determined by the content of the story, some inspired by the surroundings at the time—I dropped a pin in Google Maps on my phone and wrote down the coordinates in my notebook. I recorded my data on paper as much for backup as for minimizing the risk of dropping my phone in the lake; I also find paper faster than typing on a phone keyboard and more reliable than voice-to-text on my Nexus 5.

POIs in Layar Developer Interface
Selection / Arrangement

Each live well appears in a location geographically significant to the storyscape; they either call attention to the place of the action (such as the narrative of the Mittie Stephens’ wreckage) or to what I know of the place in various seasons. Some live wells could indicate a different reading depending on the season, as the seasons drastically affect the vegetation and the overall look of the lake.

After assigning all the data points to the live wells, I returned to the lake in the kayak with my phone to view the story as a user might. I noted that some of the data points felt crowded on the screen, and that the story, as viewed through the mobile phone, jumped from live well to live well, sensitive to the direction of the phone relative to the data points. I considered spacing the live wells further apart, but opted to leave them be—the flickering of the narrative recalls the “flickering signifiers” that inform all digital texts (Hayles, 1999).

Kayak and cameras on my head
Documenting the story remotely / Virtual Reality

Finally, the problem of creating a geographically place-bound digital narrative meant finding ways to convey (at least in part) the experience to my committee and my cohort remotely. To overcome some of the distance, I recorded the experience of being on the lake, experiencing some of the live wells, with virtual reality (VR) cameras.

I chose the immersive, multi-dimensional medium both to step deeper into the immersiveness of the technology and to relay more of the sense of place of Caddo than a traditional camera allows for.

At the time of this story, higher quality virtual reality consumer products like the Oculus Rift and Samsung’s Gear VR had just come onto the market. With such displays available, camera makers are also catching up to the consumer VR recording market. I was limited by price and by the availability of VR-capable devices, and chose the Kodak Pixpro360 camera for its convenient size and because it comes with a waterproof, head-mountable casing—a necessity for taking it into the kayak while leaving my hands free to paddle and use the phone.

The Pixpro360 camera that I used acts as a super-wide-angle lens, capable of recording 270 degrees of view—not quite the full 360 degrees of a completely immersive experience. To capture the remaining field of view, I used a second head-mounted Pixpro360. I then attempted to stitch the two sets of footage together in Adobe After Effects. This integration of perspectives proved more difficult than I anticipated, as I explore in my implications section; however, the immersiveness of the VR experience remains. The only difference in viewer experience is that the view blurs at the periphery of each camera’s field and a black spot appears directly behind the field of view.

YouTube 360 allows users to upload 360 content such as the footage from my Pixpro360 camera that YouTube then optimizes for Google Cardboard. Because of both the file size and the more affordable delivery system, I opted to host my videos on YouTube 360 to embed them here throughout my narrative.

The following 360 video requires Google Cardboard or GearVR and a mobile phone to view. View instructions for more information.





These immersive methods—creating the storyscape with Layar and my own geolayar, going out onto the lake during the flood, and recording the story via VR cameras—allowed me to explore the medium and creation of my geographically bound story. While participatory methods of creation and post-process reflection are not new in the field of rhetoric and composition, my application of this immersive methodology means considering new technological factors and emerging media. Going to a place that is both significant to me personally and rich with metaphors of uncertain boundary-lines provided ample space to explore the implications of this new medium.