In my last post I discussed why video games should be studied, and why the digital humanities is a field uniquely suited to carry that torch. There is one key to being able to properly study a particular video game that I had not addressed, though: the continuing existence of that video game. Unfortunately, the preservation of video games is hardly a straightforward affair, and there are several aspects that must be considered and hopefully addressed when preserving a video game for future use and study.
1) Source code vs. ROM
First, there's the issue of what exactly you should preserve to preserve the game itself. Many people would probably think, "Hey, I've kept around this old NES cartridge of Zelda II: The Adventure of Link. That's good enough, right?" Not necessarily. What lives on that cartridge (or on a disc, on your hard drive, in the cloud, wherever) is Read-Only Memory: a compiled build of the game, formatted to run in one particular environment (e.g. your NES). If you want to preserve the authoritative description of how the game should function, you want the source code, which is something usually only the developer has. And unfortunately, many developers fail to archive their source code once a game has shipped, either because they don't care or because they don't understand the value of the source code relative to the released product.
2) Hardware & Software
Okay, so let's say you've successfully preserved a copy of the game that you should be able to play. Now you need to have kept around something to play it on, or to have the capability to modify or build something to play it on. This is straightforward if what you have is something like an NES cartridge: you need the right video game console (or a replica of it). But what if you've only saved the software from that cartridge as a ROM file? Then hopefully you can get an emulator or virtual machine to run it on your computer, and moreover an emulator or virtual machine that can run it as it was meant to run, since you're clearly not running it on the hardware and software the ROM was designed for. So in addition to preserving the game, you have to at least preserve the specifications of the hardware and software it was meant to run on.
3) Interface
Speaking of hardware, what about the interface components that define how you interact with the game? Is it adequate to play an NES game on an HD television when it was designed with CRT televisions in mind? In other words, you should also consider preserving the type of display the game was designed for, or at least replicating what that type of display would have provided. Beyond the display, there are also the input interfaces, such as a keyboard, mouse, or game controller. To truly replicate the game experience as designed, you really should use the same type of input method, if not identical hardware.
4) Multiplayer Gaming Environment
Multiplayer games get their own bonus entry on this list! The reason is relatively simple: when the experience of a game relies on multiple players being present, it's pretty much impossible to replicate the game as designed by yourself. Now, this isn't as much of an issue for games that support a limited number of players, such as two to four, although it obviously does require keeping around enough hardware and software to support all those players at once. What is truly difficult, and presents a unique challenge in preservation, is the massively multiplayer game, such as World of Warcraft, which is intended to have a vast community of players online at the same time. Strip away that player base and the game's environment is fundamentally altered.
As you can see, there are myriad challenges to overcome when it comes to preserving video games for the future, some of which I still don't see good solutions for (such as preserving massively multiplayer games without enslaving masses of people to keep playing them for you into eternity).
Tuesday, April 9, 2013
Video Games: Art or Not, Why They Should Be Studied in the Digital Humanities
Whenever a jarringly new creative medium emerges, there comes, without fail, a debate about the merits of approaching and analyzing such media. This happened with photography and film, two media whose critical study we no longer question today (barring, of course, fringe voices), and we now see the same debate raging anew over video games. One of the most common forms this debate takes is whether or not the new medium can be considered art, and with video games it has been a very public debate, as exemplified when Roger Ebert declared that "video games can never be art." While I very much disagree with Ebert's opinion (to an extent: my definition of art is subjective, so I will grant that video games were never art to him, though that does not mean they cannot be art to others), this kind of debate is very much welcome, as it intrinsically creates critical discussion of such works and helps forge a new discourse for how the medium should be studied, regardless of its status as art.
Since the study of art is only one aspect of the humanities (the field is also engaged with culture, history, and so on), I can think of no field of study more suited to the task of engaging video games critically than the digital humanities. Video games are objects that do not lend themselves well to many of our extant discourses because of their amalgamation of technology, design, creativity, and dynamism. The digital humanities not only supports the interdisciplinary, multimedia aspect of video games better than most other fields, owing to the interdisciplinary nature of the humanities in general, but is also one of the few fields of study to already engage critically with other dynamic texts (e.g. in critical code studies, since code is practically always dynamic in how we can interact with it).
Granted, new media studies is another, more recent field that could also pick up this mantle and help us engage with video games critically. But what it cannot offer, and the digital humanities can, is the support needed to connect this new mode of discourse with other extant humanities discourses. In other words, the digital humanities acts much better as an interdisciplinary communicator among various disciplines, and new media studies is but one of the humanities disciplines that the digital humanities can include in such a diverse engagement. Video games should be studied as new media, as narratives, as historical objects, as visual design, and more, and the digital humanities will allow such a complex discourse to be built around them.
Thankfully, I'm not the only one thinking this. In 2012, the Journal of Digital Humanities had a special section devoted to the intersection of video games with the field of history. While this is far from perfect (it looked at the medium from only one field's point of view, whereas the digital humanities allows a forum for comparing the insights available through different fields of study), it is still a welcome step in the right direction.
Monday, April 1, 2013
Marking-Up Multimedia
One of my favorite aspects of the digital humanities is the Text Encoding Initiative (TEI), and how it allows scholars to embed semantically weighted metadata into the text itself. We no longer have to present a reading of a text simply as an external artifact; we can integrate that reading into the digital rendition of the text. For example, if we want to examine the role of location in a corpus, that corpus can be tagged directly for instances of place, and through this tagging we can query and manipulate the texts themselves to extract meaning for display and analysis.
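To make this concrete, here is a minimal sketch of what such place tagging might look like in TEI P5 (the sentence and the @ref targets are invented for illustration; <placeName> itself is a standard TEI element):

    <p>The expedition set out from <placeName ref="#cairo">Cairo</placeName>
       and reached <placeName ref="#luxor">Luxor</placeName> ten days later.</p>

A query over the tagged corpus could then pull every <placeName> and resolve the @ref values against a gazetteer for display or mapping.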
Since this type of mark-up was designed with textual documents in mind, it works fantastically for them, but runs into problems when we try to expand it to include multimedia outside of strictly textual materials, such as audio and video. But why limit ourselves to simply marking up transcriptions of lyrics and dialogue, or to scripts?
Now, I'm sure you're wondering: how can you adequately mark up something that unfolds over time, or visuals that are constantly in motion? These things are non-static in how we experience them, but they do not have to be non-static when we mark them up. For a model of how to treat them as static in a way adequate for adapting TEI to such media, I would suggest turning to non-linear editing.
For those unfamiliar with what non-linear editing looks like, here's a screencap from Adobe Premiere to help illustrate it:
The part to pay especially close attention to is the timeline sequence along the bottom, which has clips of audio and video (those colorful boxes) arranged along it. That's the view commonly used for non-linear editing. (N.B. It's called non-linear because you can jump to and from anywhere on the timeline, with no regard for the footage's original linearity, unlike in traditional, physical film editing.)
Now, imagine an interface similar to this, except that instead of loading multiple clips to edit, you have a single object on the timeline: whatever object you'll be marking up. The upper-right pane would serve its current purpose, as a "monitor" through which you can view (or listen to) the content you are editing. The pane to the left of the monitor would then be used for editing the TEI tag(s) being opened or closed at the point on the timeline where your cursor currently rests. The granularity of this temporal mark-up could be set to a specified interval, such as a tenth or a hundredth of a second, letting each project be as precise as necessary about when tags open and close (see the sketch below).
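TEI already has machinery such a tool could serialize these anchor points into: the <timeline> and <when> elements from its transcriptions-of-speech module. A rough sketch, with the IDs and interval values invented for illustration:

    <timeline unit="s" origin="#t0">
      <when xml:id="t0" absolute="00:00:00"/>
      <when xml:id="t1" interval="12.40" since="#t0"/>
      <when xml:id="t2" interval="15.75" since="#t0"/>
    </timeline>

Each cursor position where a tag opens or closes would get its own <when> anchor, at whatever granularity the project specifies.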
By working in conjunction with the timeline, you could also "wrap" portions of the file in the appropriate tags. Say a particular character gives a speech from 2h10m05s until 2h13m18s in the footage; you could highlight that portion of the file in this view and wrap the selected time segment with the <said> element, inserting an opening tag at 2h10m05s and a closing tag at 2h13m18s.
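Serialized with the same mechanism as above, that wrap operation might produce something like the following. (A sketch only: the speaker ID is invented, and whether <said> itself may carry @start and @end would depend on the project's schema; in stock TEI those timing attributes belong to spoken-transcription elements such as <u>.)

    <!-- 2h10m05s = 7805 s into the footage; 2h13m18s = 7998 s -->
    <when xml:id="speech-start" interval="7805" since="#t0"/>
    <when xml:id="speech-end" interval="7998" since="#t0"/>

    <said who="#character1" start="#speech-start" end="#speech-end">
      <!-- transcription of the speech, if one is included -->
    </said>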
Of course, this is just a nascent idea, and there would be technical specifics to be worked out, such as supported multimedia formats, software for such editing, and how this could actually be made beneficial on the consumer's end (e.g. how can this be used to dynamically enrich the manner in which a researcher studies this object?), but it would be wonderful if it could be made to work as a way to encode multimedia that has a temporal dimension. (Or, if there's something out there like it already, I'd love to know of it.)