Wednesday, December 17, 2014

Davidson Mapping App Citations

Citations
Didem Ozkul and David Gauntlett. “Locative Media in the City: Drawing Maps and Telling Stories.” In Mobile Stories.
“Frequent Questions: Records Management.” U.S. Environmental Protection Agency. August 3, 2012. Accessed November 20, 2014. http://www.epa.gov/records/faqs/geospatial.html
Stephen Ramsay and Geoffrey Rockwell. “Developing Things: Notes Towards an Epistemology of Building in the Digital Humanities.” In Debates in the Digital Humanities.
OpenPlans. GeoServer 2.6.x User Manual. 2014. Accessed November 20, 2014. http://docs.geoserver.org/stable/en/user/introduction/history.html

The Davidson Mapping App and the Digital Humanities

Project and the Field of the Digital Humanities:
One of the biggest challenges Digital Studies faces as a whole is how abstract and complex its fundamental products are. The threshold of complexity is often high, leaving many creators and users confused because they lack specific knowledge of the field’s particular processes. In the realm of Digital Mapping, the various coordinate systems, terms, and file types keep creators from pushing as far beyond traditional mapping ideas as they could, and keep the layman from using the products, since only those immersed in the study of Digital Mapping can grasp the full complexity of some of the projects. Outsiders may be able to see the big picture, but unlike with text files or images, very few understand the mechanics of the processes. With the Davidson Mapping App, I hope to create a simple presentation of geospatial data that serves both as a practical tool for navigation and as an example of how geospatial data can be approached by an average user.
Stephen Ramsay and Geoffrey Rockwell’s “Developing Things: Notes toward an Epistemology of Building in the Digital Humanities” offers a clear and concise summary of the general issues surrounding digital studies as a whole. Ramsay and Rockwell argue that the abstract nature of Digital Studies has left many within the community questioning and arguing over how the field should be defined. This is in part due to the wide range of complex ideas spread across the different segments of the Digital Humanities, where “their work is all about XML, XSLT, GIS, R, CSS, and C” (Ramsay, Rockwell). While many average computer users understand how text can be bolded or italicized, and at least know that JPEG and PNG files refer to images, for most people the aforementioned file types are simply gibberish. Moreover, as Ramsay and Rockwell’s discussion shows, there is no uniformity in the use of these file types even across the subsections of the Digital Humanities: not every map is made with GIS, and not every program is written in C. This complexity is part of why Ramsay and Rockwell seem to have left out any discussion of the Digital Humanities for the common man, the only noticeable omission in the article. While I would have liked to see that discussion, if Digital Humanities departments cannot define themselves, it would be difficult for a layman to have any idea where to start.
There are many attempts to explain geospatial data, a concept central to digital mapping, to the layman, with mixed results. The Environmental Protection Agency (EPA) attempts to define geospatial data for those wishing to keep records, but its definitions and procedures reveal the outdated approaches that those working outside the Digital Humanities often take, whether out of ease or necessity. While offering advice on how to store records, the EPA states that “Geospatial data records are often in special formats (e.g., oversized paper maps or data sets). Therefore, it is especially important to identify the geospatial data records with appropriate metadata, so the records can be easily accessed and retrieved with other, related records” (Environmental Protection Agency, Frequent Questions about Geospatial Data and Records). Rather than assume the EPA is ignorant of more condensed ways of storing geospatial data, it seems the agency must suggest less compact ways of storing data simply because they are easier for the user, given the overwhelming complexity that shapefiles and raster layers bring to the uninitiated record keeper. The FAQ may not be a robust description of the idea of geospatial data, but it must limit itself to inefficient simplicity in order to explain itself to its users.
However, even without the need to focus on practical applications like record keeping, the definition of geospatial data can remain elusive. Even the GeoServer documentation, a “user manual” for those trying to understand geospatial servers, must resort to comparisons with text and web pages in order to convey just what geospatial data is. While the manual claims that “Soon a search for spatial data will be as easy as a Google search for a web page” (OpenPlans, GeoServer 2.6.x User Manual), it also invokes “browser”-based systems and offers very few concrete examples that truly explain what geospatial data is supposed to be. The manual tries to argue that geospatial data is fundamentally different from other types of data, yet only describes it through comparisons.
However, to understand geospatial data one only needs to look as far as the concept of spaces and places in people’s minds, commonly referred to as a “mental map.” Ozkul and Gauntlett’s “Locative Media in the City: Drawing Maps and Telling Stories,” in Mobile Stories, serves as both an easy-to-comprehend discussion of what mental maps are and an account of how people view geospatial data within their own minds. In their study, participants were asked to “draw a map of London showing ‘frequently visited places’” (Ozkul, Gauntlett 114). What surfaced did not take the form of raster layers, CSS code, or shapefiles placed by a complex coordinate system. Rather, people drew pictures and wrote words in order to explain how geospatial data related to the real world. They also discussed concepts out loud that described how they viewed geospatial data, though they might not have labeled their ideas as such (114). This discussion highlights one of the key difficulties surrounding the abstract nature of many conversations in the digital humanities: text, pictures, and other common forms of data are not entities separate from geospatial data, but simply another lens through which to view the various types of data that make up the world as a whole.
Data is not nearly as sectioned off into non-overlapping categories as those obsessed with the quantitative over the qualitative might want you to think. Images like photographs can easily contain text, from a photo of a book to a simple captioned image. Text can be used to create images, such as ASCII[1] art or emoticons[2]. The tools we use to create these are the same at their base as well: webpages are made up of pixels that render both text and images, all of which are founded in the same code. Different tools produce similar results, but it is not the intrinsic makeup of these types of data that defines what they are; it is how we as people choose to interpret them. Likewise, geospatial data does not need to be made up of completely different components from webpages or any other medium. What geospatial data does is combine the same elements we use daily to produce other types of data in a way that people interpret as having to do with the space and place around them. This simplicity is something I hope to achieve with the Davidson Mapping App I am creating with MIT’s App Inventor software[3].
Rather than trying to preserve a purity of only geospatial data, the Davidson Mapping App looks at text and image data through a geospatial lens. The current Davidson map[4] uses shapes and symbols as its primary indicators of space, yet that is often confusing, since people tend to think in terms of descriptions and mental pictures rather than those particular symbols (Ozkul, Gauntlett). The Mapping App therefore adds textual descriptions and identifiable images to the available data to give users the best sense of where these spaces are, what they look like, and what they contain. Practically speaking, the text tells users what the buildings are commonly used for and what specific areas they contain, such as Hance Auditorium on the fourth floor of Chambers, which, according to several Davidson students, was a very difficult place to locate the first time. The images give users’ mental maps a better foundation than the symbols do; rather than simplistic shapefiles to go off of, users can hold an image of the building or space in their minds that closely matches what they will see when they approach it. Beyond navigation, however, the app also serves to get users familiar with geospatial data itself.
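To make that concrete, here is a rough sketch of the kind of record that lens implies. The names, coordinates, and file paths below are hypothetical placeholders rather than the actual contents of my App Inventor project, but they show that a “geospatial” entry is really just ordinary text, an image reference, and a pair of numbers:

```python
# A minimal sketch of one campus location entry (hypothetical values).
# Everything "geospatial" here is just plain text and numbers that a user
# interprets spatially -- no special file formats are required.
from dataclasses import dataclass

@dataclass
class Location:
    name: str         # the colloquial building name
    description: str  # what the building is commonly used for
    image_path: str   # a photo that matches what the visitor will actually see
    lat: float        # latitude in decimal degrees (approximate)
    lon: float        # longitude in decimal degrees (approximate)

chambers = Location(
    name="Chambers Building",
    description="Main academic building; Hance Auditorium is on the fourth floor.",
    image_path="media/chambers.jpg",
    lat=35.50,   # placeholder coordinates, for illustration only
    lon=-80.85,
)
```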
MIT’s App Inventor is a program built around simplicity and is therefore a perfect medium for conveying geospatial data in a clear and simple manner. Apps are programmed using predetermined blocks of code, which keeps the interface simple for both creators and users. While this design may at first seem limiting, it streamlines both building the app and using it. One cannot incorporate GIS files or Excel spreadsheets into this tool. The cartographer and the layman are therefore on common ground, and data does not need to be translated from a complicated form back into simpler terms. App Inventor does not work well for complicated projects, but it is a great tool for understanding the basic components of data and for presenting those components to a user.
In order to achieve definition at the higher levels of digital studies, we must first be able to explain ourselves in simple terms to the average person. While there will always be an important place for discussion at the higher levels of the subject, it is important to make the Digital Humanities as accessible to the common person as basic math, science, language, or art. Tools that appeal to our interpretation of geospatial data, rather than to the semantics surrounding it, will help us better understand what the essence of Digital Mapping within the Digital Humanities really is.
[1] ASCII art consists of pictures made using only the 128 characters of the American Standard Code for Information Interchange.
[2] Emoticons use the characters on a keyboard to denote certain facial expressions or emotions.
[3] http://appinventor.mit.edu/explore/
[4] http://www.davidson.edu/Documents/About/Visit/Campus%20Map/Campus-Map-8-5×11-2013.pdf

The Process of Creating the Davidson Mapping App

Part 1: The Idea
The idea behind the Davidson Mapping App was grounded in simplicity and practicality: the goal was to come up with something that was both a simple concept and a practical application. Simplicity of concept was necessary because the available design tools were all complex in their own way, and the last thing a project with an impending deadline needs is a complex idea executed by a complex process. As for practicality, keeping a user base in mind while designing the project would keep it focused instead of abstract; I would be making something that worked for people rather than something that just looked pretty.
The basic idea, before any platform was chosen, was to create something that helped people (students, parents, teachers, etc.) get around campus and know where places were. For example, many visitors and new students are often confused about locations such as the Duke Family Performance Hall and the Lilly Gallery, since those are located inside other buildings and therefore often not included in visual maps that only show buildings. In addition, the colloquial names of buildings are sometimes not indicative of what they are used for, such as Chambers being the main space for English classes, or Sloan being the music center. Therefore, the plan was to create a platform where those who were confused could easily learn more about the spaces of Davidson.
Part 2: Creation
Figuring out which platform to use was the first integral step in this process. I had to decide between two major options. The first was a website, which could handle various levels of complexity to fit my vision for the project. However, accessibility would most likely be limited to computers, and, generally speaking, people don’t tend to get lost while sitting at a computer. An app, the second option, would be more useful to this audience, since it could be used while they are out and about around campus, but the app-building program, AI2, only works on Android phones and offers less complex functionality. In the end, I went with the app, because the decrease in complex functionality probably wouldn’t hinder me very much, as I don’t have the skills to utilize that complexity anyway.
Throughout the process of creating the app, I hit a few design blocks. Originally, I had programmed the app to take users to different screens featuring each location. However, this approach bogged the app down considerably, as keeping many screens available required a lot of processing power. Therefore, I changed the design to use various hidden components that could be revealed: when the button representing a location is clicked, an image and a corresponding description of the building appear on the screen. Another issue I had to overcome was the file size of the images I used. Simply scaling the pictures down within the app still left the large files in the app’s media, so I had to manually decrease the quality and size of the images before adding them, keeping the app’s overall file size reasonable. The final problem was the lack of a search function. I had originally intended for users to be able to search for places within the app, yet that function was proving too difficult to program. Therefore, I included two additional features. The first was an image of the Davidson campus at large, which helps users pinpoint where they are. The other opens a web browser pointed at the Davidson.edu search engine, so users can look up any location that they cannot find or that is not included in the app.
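The image shrinking described above was done by hand, but the same step could be scripted. The sketch below uses the Pillow imaging library to batch-resize and recompress photos before they are uploaded as app media; the folder names and target dimensions are assumptions for illustration, not part of my actual workflow:

```python
# Hypothetical helper: shrink photos before adding them to the app's media
# so the packaged app does not balloon in size. Requires the Pillow library.
from pathlib import Path
from PIL import Image

MAX_SIZE = (800, 600)  # assumed target dimensions

def shrink(src: Path, dst: Path) -> None:
    img = Image.open(src)
    img.thumbnail(MAX_SIZE)                   # scale down, preserving aspect ratio
    img.save(dst, quality=70, optimize=True)  # recompress to cut the file size

dst_dir = Path("app_media")
dst_dir.mkdir(exist_ok=True)
for photo in Path("photos").glob("*.jpg"):    # assumed source folder of campus photos
    shrink(photo, dst_dir / photo.name)
```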
There are a few additional parts to the creation of the app. Most of the programming consists of redundant copies of code for different objects that need to behave the same way. The first and main screen of the app has the most programming; its buttons are designed to toggle the visibility of the various groups of objects. There is also programming that sets all of these objects to invisible at startup, since the app makes elements visible by default. Additionally, the text must be spaced appropriately in the designer, as the spacing you see in the app creator is not the same as what appears in the actual app; a lot of trial and error was necessary to keep pieces of text from overlapping with one another. The other buttons simply navigate between screens.
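App Inventor expresses all of this as visual blocks rather than text, but the logic behind each button is roughly the following. This is a Python paraphrase for readers who want the gist, not actual App Inventor code, and the building names are placeholders:

```python
# Rough paraphrase of the block logic (illustration only, not App Inventor code).
# Each building has a group of components (an image and a description label)
# that starts hidden; its button simply toggles that group's visibility.

class ComponentGroup:
    def __init__(self, name: str):
        self.name = name
        self.visible = True  # components are visible by default

groups = {name: ComponentGroup(name) for name in ("Chambers", "Sloan", "Union")}

def on_screen_initialize():
    # Equivalent of the startup blocks that hide every group.
    for group in groups.values():
        group.visible = False

def on_button_click(building: str):
    # Equivalent of a 'when Button.Click' block: show the group if it is
    # hidden, hide it if it is showing.
    groups[building].visible = not groups[building].visible
```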
Part 3: Results and Moving Forward
After beta testing, the most requested feature was specific directions to the various locations based on where you are when you access the app. The descriptions as of now give general directions to the various buildings, namely by saying which other buildings and features neighbor the structure in question. I do believe location-aware directions are attainable, but the programming is complex, would not have met the primary deadline, and is not essential to the use of the app. Other features brought up while showcasing the app are as follows. First, during a better season I will need to update the pictures with more flattering weather as a backdrop, or otherwise dive into the archives to find some quality pictures of the various buildings around campus. Next, I will need to add more locations to the app, and perhaps even group them. Right now, the locations are limited to the main academic buildings and other buildings that would be important for freshmen and the families of freshmen, who would most likely be the ones in need of the app. Given more time, I can add more locations to make the app more comprehensive. After all this, the app would hopefully be in a good position to submit to Davidson for distribution to parents and students during the year.
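For a sense of what the location-based directions feature would involve, its core would likely be a nearest-building calculation from the phone’s GPS fix. The sketch below shows the standard great-circle (haversine) distance math; the coordinates and building list are placeholders, and none of this is code from the current app:

```python
# Sketch of a possible "which building is nearest to me?" helper
# (illustrative placeholders, not part of the current app).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6,371 km

buildings = {"Chambers": (35.501, -80.846), "Sloan": (35.500, -80.848)}  # placeholders

def nearest_building(user_lat, user_lon):
    return min(buildings, key=lambda name: haversine_m(user_lat, user_lon, *buildings[name]))
```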

Welcome to the Davidson Mapping App Homepage!

Here you can find the download link for the app, as well as links to the description of the process and to the integration of the app into the concepts of the Digital Humanities.

Link to Download App:  Davidson Mapping App

The Process of Creating the App

The Integration of the App into the Digital Humanities

Citations

Wednesday, February 22, 2012

A Sirious Look at iGames and Voice Commands

So, when Siri came out on the iPhone 4 I thought it was a sign of the Robot Apocalypse. Hey, it's 2012. I gots Doomsday on the brain. It's more realistic than my fear of Escarmageddon, when we're all going to be destroyed by snails.

You laugh now...

Anyconspiracy, I have taken the time to test out this "Siri" and I have come to the conclusion that it is not yet intelligent enough to be able to conquer the world. Heck, it doesn't even know that the first Mega Man game came out in 1887.

The nerve of some things.

Anyways, to be a bit less hypercritical: I assume some of you are wondering what my opinion of Siri is, while the rest of you are simply expecting it, since it seems that's where this segue is leading.

I guess I need to start out by saying that I'm not really much of an Apple Fanboy. I don't have any particular qualms with the company; I do have an iPod and iTunes and all that iJazz. But in terms of gaming I just can't really get behind it. Elements like the touchscreen and tilt controls aren't unheard of in popular console games, but like everything with video games, I prefer them to act as a supplement to the basic "push buttons to win" feel of a game rather than a replacement. It's cool that I can select an item from the bottom screen with a touch in the various DS Mario games, but when all I'm doing is tapping the screen as if I were using a button, or dragging my finger around as if I were using a mouse, without the physical feel of the real actions, it feels like I'm getting a hampered experience.

It doesn't help that most games, at least most popular ones, seem to be those with no definitive goal. Getting the little cannon guy as far up the screen as possible, or slashing fruit repeatedly and simply hoping you get about 50 watermelons at once or something, just doesn't give me the satisfying feeling that comes from reaching the end credits of a game you've been working on for days, weeks, months, or even years. All I feel like I'm doing with these App games is wasting time for the sake of wasting time, with only a rather arbitrary number to show for my efforts.

So yeah, that's my opinion on game apps. But as for Siri? I give it a heartening "Eeh." It certainly isn't bad; there are good uses for it. But it really isn't the whole "amazing, does everything for you" assistant it led us to believe it would be, even if that's a fairly unrealistic expectation. But it can set dates! That comes up... sometimes. But honestly, whenever I tell it things it usually just offers to do a web search for them. I guess all that really does is save typing on those teeny-tiny spaces that are supposed to be "keys." Also, any time I mention video games at all, it just tells me about places that sell video games that are fairly close to me.

So helpful? Yes. Life changing? No. My two cents, it doesn't buy me an iPhone.

Sunday, February 5, 2012

Skill Level: Unknown

If any of you read my "yes I totally know it's a social network craze" Twitter account, then you are most likely aware of my current forays into Mega Man 5, my all time favorite Mega Man game.

While my analysis of Mega Man 5 will surely come at some later date (or dates, I've got quite a lot to say when that time rolls around), today I'd like to share with you a small secret: I think I'm getting good at this game.

You may ask why I only think that I'm good at the game, and that's a completely legitimate question. Now allow me to give you the answer even if you don't care. 'Cause you came to my blog, and you must pay the penalty.

You see, when I made my first attempt to display my talent to the interwebz in my now unfinished Let's Play of Mega Man 5, I thought I was good at MM5. But the reason I'm not giving you a hyperlink to those videos is because I have come to the realization that I was not, in fact, good at MM5 yet. (P.S. MM5 = Mega Man 5). I had made very few ventures into the castle stages due to my nasty habit of playing the initial 8 over and over and over and come tumbling over...

Needless to say, once I hit the castle stages in my MM5 Let's Play things took a turn for the bitter as the failure became so immense that I gave up on Let's Plays for quite some time.

So what makes me even consider I am actually good now? Have I played through the castle stages more? (Yes) Am I more familiar with the weapons? (Yes and yes) Have I played the Robot Master stages more? (Yerp-a-derp). But the main reason, I got two words for you: Perfect Buster Runs.

Wait, that statement had two words when I thought it. Dang fickle brain, why do you torment me so?

Now, before you go singing my praises let me be clear: I can not perfect run MM5. I can only do individual runs of individual stages without getting hit, usually after some trial and error. I find this impressive, but MM5 is slightly notorious for being easy. Right now, I can beat Gravity, Gyro, and Star Man's stages without getting hit and using only the Mega Buster.

So what do you think? Am I skilled, or just full of myself?