Tuesday, September 25, 2007

Google Trends (more on my beloved Google)

More on Google tools. This time it's all about Google Trends. You can use this application to track search volume (through Google) and news reference volume for any topic. You can change the span of time you're looking at, narrow to particular regions, and get all sorts of other interesting data.

For example, you can see how many people search for Emory and can correlate spikes in searches to newsworthy happenings around campus. Emory looks pretty good until you put some of our "peer" institutions into the mix. Alas.

In any case, this is an interesting tool with a range of applications. If you're in an English department, you could track the online community's interest in authors or particular texts (although be wary of how you use search terms like "Lolita"; to get the right results, you need the author's name too). If you're into politics, you can watch not only the spectacular demise of Sen. Larry Craig, but also the spectacular demise of the nation's interest in the story here. Or you can see how Craig stacks up against some of those in the race for the presidency here. There's a dissertation or a short paper in here for someone who wants to analyze the predictive nature of web searches.

The search history only goes back to 2004, and you have to be careful about how you phrase your searches. But this is an interesting record of what seems to interest us.

P.S. If in writing about this I'm as woefully out of date as Stanley Fish opining about coffee shops, then I hope someone will please tell me to move on to fresh material.

Bloglines vs. Google Reader

I'm curious about feed aggregators. I haven't used them a whole lot except on my iGoogle page. I know some of this blog's readers (you faithful few) are partial to Bloglines. I've also fiddled around with Google Reader (given my slavish devotion to the big G and a desire to keep things within one account where possible).

So I'm wondering if anyone can argue for using one service over another. Within Bloglines, can you share your feeds with others? I've discovered how to share materials in G'Reader (you can see mine here or subscribe to the feed here). But what I'd really like to do is to see the feeds that good friends, like RB or SAP, or supervisor-types, like WM or ME, peruse regularly. I'm not seeing that in the Google version at the moment.

In any case, I'm interested in getting opinions here about how you read blogs. Consider this your chance to participate in Web 2.0 and create some content yourself.

Horizon report, pt.2

So I'm just reviewing parts of the Horizon report and want to start talking about the technologies the consortium has highlighted. The first one up is user-created content.

This is a topic we've all heard so much about that I'm not sure that I have anything to add at this moment. We all know that one of the key trends of Web 2.0 is the rise of applications like del.icio.us, flickr, and, of course, YouTube. We know that what is interesting about these sites is the way people can openly share content--found on the web or created themselves--with others. We know about blogs, Digg, and other sites that let us choose what we want to say and (collectively) what information should rise to the forefront of the web's consciousness. And we know that with all of these tools, the real key is tagging the information in a proper manner so it can be collated effectively.

We know some things about how we might use these technologies in our classroom: we could use del.icio.us to create a set of shared bookmarks on, say, a Toni Morrison novel. Each student looking for relevant materials could save what they find under a tag or set of tags (say, "English_363" and "Jazz"). We can ask students to write responses to readings or class assignments in a blog as well as to read each other's thoughts on the subject. We can assign a semester-long video project and watch its iterations as students upload each version to YouTube.

The two new things I learned from the report were Zotero, which I've written about already, and an interesting project at UPenn, where they have created their own tagging system for use throughout the university. It appears that you can tag not only URLs from around the web, but also items in their library catalog and in subscription databases like JSTOR.

What I really like about this idea is the implementation within a library catalog. Have you ever had difficulty finding something in EUCLID? Or, more likely, have you ever wished there were a way to browse the books on a subject without being in the library or having to depend on the subject headings? If we were able to tag our library's collection, we would have a way of identifying important texts for ourselves or for the classes we teach, and we would benefit from others' experience with texts insofar as they have tagged them in ways the acquisitions department might not have known about or thought important. You could then follow the tags to other interesting materials, or follow the tagger(s) to see what other projects they've been working on that might be of related interest to you.
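Just to make the mechanics concrete, here's a toy sketch in Python of how such a tag index might hang together (entirely my own invention, with made-up identifiers; it has nothing to do with Penn's actual system). The point is that a URL, a catalog record, and a JSTOR item can all live in the same index:

```python
from collections import defaultdict

# A toy tag index: any identifier (URL, catalog record number,
# JSTOR link) can carry tags, and tags lead back to every item
# and tagger that shares them.
class TagIndex:
    def __init__(self):
        self.items_by_tag = defaultdict(set)   # tag -> item ids
        self.tags_by_item = defaultdict(set)   # item id -> tags
        self.taggers = defaultdict(set)        # (item, tag) -> users

    def tag(self, user, item_id, *tags):
        for t in tags:
            self.items_by_tag[t].add(item_id)
            self.tags_by_item[item_id].add(t)
            self.taggers[(item_id, t)].add(user)

    def items_for(self, tag):
        return sorted(self.items_by_tag[tag])

    def related(self, item_id):
        # Follow an item's tags out to other interesting materials.
        found = set()
        for t in self.tags_by_item[item_id]:
            found |= self.items_by_tag[t]
        found.discard(item_id)
        return sorted(found)

index = TagIndex()
index.tag("walter", "euclid:b1234567", "jazz", "toni_morrison")
index.tag("rb", "http://www.jstor.org/stable/1234", "jazz")
print(index.items_for("jazz"))
print(index.related("euclid:b1234567"))
```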

Unfortunately, the Penn system doesn't show the tags within catalog entries, which I think would be ideal. Otherwise, I'm not sure what the difference is between what they've been doing and simply using del.icio.us (except that someone, somewhere can talk about how s/he instituted social computing at Penn on his/her vita). Perhaps it makes it easier to link to subscription services. In any case, they have a start on creating their own tags, and I would hope that this would make it possible for the library to implement them in a new and interesting way.

So, I'm left asking myself how best to use user-created content in our courses. I really like the idea of having students do some blogging or author parts of a wiki to teach them some different writing skills. As students learn the conventions of writing in different media, we gain the opportunity to teach them about differing rhetorical exigencies that pertain to each type of writing. I like the fact that students who write in a shared/open space have the opportunity to receive feedback on their ideas (not their grammar, most of the time) from their peers as well as from their professor. I really appreciate the ease with which these tools work; that they are mostly free; and that the publication of these materials can help students feel like their work is not just a very one-sided conversation between them and the professor.

The difficulty with user-created content--and specifically with tagging--is that it can produce so much material to sort through. If I'm tagging materials on Jazz and all of my students are too, how do we determine which of the articles are worth reading? And if we use the same tags from year to year, we will amass even more material (although a wealth of materials has its own benefits). And don't even get me started on the problems associated with misspelled tags or the inability to edit them.

I like the technologies and platforms that come along with user-created content. But as with everything, we need to think about how we will use them in the classroom. Hmm. I'm guessing I'll be able to end most every post this way.

P.S. I'm very proud of myself for resisting the whole meta-implications of this post.

Monday, September 24, 2007

Teaching with Google Earth

I'm fascinated by Google Earth, as you may have previously noticed. I was poking around today and found a really great blog with ideas for teaching using the software. A recent discussion there examines using video and audio embedded within Google Earth to teach something like Huckleberry Finn. The blog on the whole seems aimed toward people teaching in K-12, but there's definitely something we can learn by reading other people's ideas for their classrooms.

7 Things You Should Know

So in connection with my reading up on The Horizon Report, I stumbled across another series of documents over at Educause: "7 Things You Should Know About...". This is a monthly series that highlights a particular technology or tool that can be and is being used in education and quickly identifies what it is, how it works, and where it is going in the future. Past articles have covered blogs (way back in 2005; they're not THAT out of date), RSS, Wikipedia, and haptics.

If you're not familiar with these tools, it's worth quickly reading through them and learning what they are and what you can do with them.

Friday, September 21, 2007

Digitizing the Analog

I spent some time over the last two days learning to use some of ECIT's audio equipment and software. I decided to learn how to digitize some analog material and so brought in a live recording of the band I played with during my freshman year of college: The Shriners. This was the first recording we'd made other than setting up a bad tape recorder at the back of a venue where we were playing. We used a DAT deck, routed through the sound board of the place where we performed, and then used the recording to make 300 or so shiny red cassette tapes that we sold for $2 each.

Not wanting this piece of nostalgia to degrade any further than it had after 12 years, I brought the tape into ECIT. Using a tape deck, a sound board, and Sound Forge, I recorded the tape into a digital file. It took a little while to figure out the input levels because I wanted to avoid clipping, but within 10 minutes I was up and running. After inputting the entire tape (the tape deck I was using was nice and automatically flipped the audio to the other side while I was recording, so I only had one track), I used Sound Forge to trim the silence that began and ended the recording. SF is a very powerful tool, but since I mainly wanted to archive the audio, I used it to save the track out to a WAV file (which is lossless [meaning it captured perfectly the imperfect quality of the tape]) and was more or less done.

I then moved on to another computer to play with Audacity, which is also sound-editing software. My goal here was to take the WAV and save out individual tracks as mp3s. I could have done this with Sound Forge as easily as with Audacity, but I went with the latter for a few reasons. First of all, it's a tool Shannon and Wayne have wanted me to learn to use. Second, what's great about Audacity is that it is free (as opposed to SF's $300 price tag), it's cross-platform (SF is Windows only), and it is open source (meaning that it's constantly being improved by a collaborative community of coders).

I thought that I would have to manually split the tracks of the WAV into chunks before encoding it into mp3, but Audacity has a tool to analyze for stretches of silence. Using this, the program found the breaks in my file, allowed me to label them with the titles of the tracks, and then export to mp3. The process took at most 3 minutes. And you can hear the results here. Just think of the career I gave up in upbeat-based music to come and spend time at Emory.
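For the curious, the same silence-based splitting can be approximated outside Audacity. Here's a rough Python sketch using the pydub library (my stand-in for illustration, not what I actually used; the filename, thresholds, and track titles are placeholders, and mp3 export requires an encoder like ffmpeg/LAME on the system):

```python
from pydub import AudioSegment
from pydub.silence import split_on_silence

# Load the archived lossless WAV of the full tape.
tape = AudioSegment.from_wav("shriners_live.wav")  # placeholder filename

# Split wherever at least 2 seconds of near-silence occurs; the
# threshold is set relative to the recording's average loudness.
tracks = split_on_silence(
    tape,
    min_silence_len=2000,           # milliseconds of silence
    silence_thresh=tape.dBFS - 16,  # 16 dB quieter than average
    keep_silence=500,               # keep a little padding per track
)

titles = ["Opener", "Second Song", "Closer"]  # hypothetical titles
for track, title in zip(tracks, titles):
    track.export(f"{title}.mp3", format="mp3", bitrate="192k")
```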

In any case, the lesson learned is that it's VERY easy to come in and digitize older materials (LPs, cassettes, VHS, etc.) for archival use (if you own the copyright) or for use in the classroom. As Emory moves more toward iTunes U, these tools will also help us splice, edit, and improve (or add wahwah to) the recordings we are making.

Ways to Waste Time with Google

I don't know that any of us need to waste any more time. But just in case you do, here are two things I recently discovered that Google and its co-brands are trying to seduce us with.

First is Blogger Play: it's basically a stream of photos that have recently been uploaded to various blogs hosted by Blogger. You can read more about it here. It's fun to just sit back and think about what people might be saying with or about these images.

Second is Google Image Labeler: This is an interesting game-like application. You are given a series of photos and asked to provide descriptive labels for them. You have 2 minutes to label as many photos as possible. The catch is that you are working simultaneously and blindly with a partner. You don't get to move on until you both match a label. The more descriptive your matched label is, the more points you get. At the end of the time, you get to see your partner's suggestions and the ones you matched with. Your score accumulates, and--if you were to play long enough--you could have one of the top spots. So why is this out here? It's basically helping Google refine its image search by collecting agreed-upon labels for particular images. Smart.
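The agreement mechanic is simple enough to sketch. Here's a toy Python version (purely my guess at the logic, certainly not Google's code): a round ends only when the two players' label sets intersect, and the matched label is what gets stored.

```python
# Toy version of the labeling game's matching round.
def play_round(image, labels_a, labels_b):
    """Both players submit labels blindly; the round is won
    only when some label appears in both lists."""
    matched = set(labels_a) & set(labels_b)
    if matched:
        label = matched.pop()
        print(f"{image}: agreed on '{label}'")
        return label
    print(f"{image}: no match yet, keep guessing")
    return None

play_round("photo_0042.jpg",                      # hypothetical image
           ["dog", "beach", "golden retriever"],  # player A's guesses
           ["sand", "beach", "ocean"])            # player B's guesses
```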

What I find interesting about both of these applications is the way that user-created content (uploaded photos and photos the Google Bots find across the web) becomes the object of attention in a way that differs (slightly, in the case of the first, and more so, in the case of the second) from other Web 2.0 sites such as Flickr or YouTube.

EndNote Alternative

I've been a fan of EndNote since I discovered it about 3/4 of the way through my undergraduate career. It's a great tool to keep all your notes about a particular project or all the research you've ever conducted in one place. That it can help you build bibliographies is just icing on the cake, in my opinion. Oh, and for those of you who don't know, you can download EndNote for free from Emory's software servers.

I just discovered a new Firefox-based alternative to EndNote, however: Zotero. It serves most of the functions of EndNote, except that it works within your browser and can automatically create records from many of the pages you're looking at (like the Emory Library Catalog or subscription databases like JSTOR). You can also export your bibliographies into documents.

The next version of the software will allow you to share files/references with others/groups of scholars and even use RSS feeds to get the latest information that, say, the Nineteenth-Century Fiction Association might want to make available to its members.

This looks like a really interesting tool. You can watch a video tour of it here. It's a project supported by, among others, the Mellon Foundation, so you can trust its credentials in that sense.

At the moment I see two drawbacks:
  1. Firefox only.
  2. It's based in the browser and NOT in an online account. What this means is that if I build my library on a computer at Emory, it won't be on my computer at home. Of course, it's very easy to export the files. But it would be nice if you could log in to your Zotero account no matter where you were on earth. Hopefully this is something they will think about including in the new version of the plugin.
Still, this is a good sign of technology to come to help us become more effective researchers.

Tuesday, September 18, 2007

An experiment...

So, thanks to the tools Blackboard provides me, I know that not a lot of people have been looking at this blog yet. I can't really blame you, since it's not like I'm dishing on the latest in celebrity fashion, music, or even academic discourse.

But another reason people haven't looked at what I've been writing, I suspect, is that you have to go through so many hoops to get at the Blackboard blog. It's not a bad tool (although design elements are obviously limited), but it is a restricted interface. Even more problematic, there's no RSS feed, so those of you who use Bloglines, Google Reader, or something else as your primary blog-reading device can't get updates. And since part of what I'm doing here is thinking about how to make the tools I'm using useful for others, it seems obvious that I'd want a community to respond to the writing. Finally, while Blackboard offers me a lot of control over who can and can't see what I'm doing, there's no easy way for me to show off this blog as part of my scholarly portfolio (whether I'd want to is another thing we can take up at a later date).

What's this all mean? Well, it means that I'm going to host this blog simultaneously here and on Blogger. I want to see if traffic increases to one or both portals when I make a second one available. So you'll be able to continue finding the blog here as well as at http://ecitadventures.blogspot.com/. Take your pick. Sometimes I'll write a post in this environment and sometimes in the other, just to see the strengths and limitations of each. The trickiest thing will be the links to hosted content. I'm using Blackboard's Store & Share feature for files you can download (like the Camtasia movie or the Google Earth kmz files). Right now the links in Blogger point back to Blackboard. You should be able to access the materials by logging in, but I can't promise that for sure. I'll fiddle around with my Emory web drive and see what I can throw up there to make sure the materials are available.

In any case, feel free to tell me if you like one platform over another or if you just don't care. C'mon. I promise I can take it. I still don't really think I'll get many comments. The idea is to get the information out there and available to those who are interested.

Update: I've changed the links here at Blogger to point to my Emory webdrive rather than the Store & Share on Blackboard. Let me know if you have any problems getting those links.

Horizon Report, pt.1

Wayne gave me a copy of The New Media Consortium/Educause Learning Initiative's Horizon Report about two weeks ago, and I'm going through it again. The report is a yearly product of the Horizon Project, which "seeks to identify and describe emerging technologies likely to have a large impact on teaching, learning, or creative expression within higher education." Obviously this is something that we at ECIT care about and something that I'm personally interested in exploring.

I have to admit that I haven't read a lot of strategic reports in my day, so I was a bit taken aback by the identification not only of six "Key Trends"--including such things as "The environment of higher education is changing rapidly" and "Academic review and faculty rewards are increasingly out of sync with new forms of scholarship"--but also of six "Critical Challenges" facing higher education over the coming five years: #1, "Assessment of new forms of work continues to present a challenge to educators and peer reviewers"; #4, "There is a skills gap between understanding how to use tools for media creation and how to create meaningful content." I think their identification of these issues is spot on--especially the questions of how we as educators teach students to use their new tools and how we then evaluate (read, "grade" or "peer review") the products our students and colleagues produce. I do think, however, that they could have cut some of the business speak.

Based on these Trends and Challenges, the report has identified 6 technologies that are on, predictably, the Horizon of higher education:

  1. User-Created Content
  2. Social Networking
  3. Mobile Phones
  4. Virtual Worlds
  5. The New Scholarship and Emerging Forms of Publication
  6. Massively Multiplayer Educational Games
As I'm reading through the report, I'm trying to think about how Emory, ECIT, and I can start to think about implementing these tools and technologies in our classroom. Feel free to throw in your own thoughts.

iTunes U Transcoding

I know we're a long way out from fully implementing iTunes U, but one thing that we have already discussed is coming up with standards for the files we will allow to be uploaded. Do we want to make things as easily playable as possible? Then we should perhaps consider using mp3 for sound. AAC and WMA are two other widely used formats, but it is only the rare media player (and especially a portable media device [aka iPod or Zune]) that can play them both. For this reason, mp3 seems the safest. But what bit rates do we encode at? Do we use CBR or VBR mp3? And I could go on. This gets a WHOLE heck of a lot more complicated when dealing with transcoding video, where the supported formats are even more varied and the terms of the conversation become even more esoteric.
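To make the CBR/VBR question concrete, here's roughly what the two encodes look like if you script the free ffmpeg encoder from Python (a sketch, assuming an ffmpeg build with LAME mp3 support is installed; the filenames are placeholders):

```python
import subprocess

# Constant bit rate: every second costs 192 kbps, so the file
# size is exactly predictable from the duration.
subprocess.run(["ffmpeg", "-i", "lecture.wav",
                "-codec:a", "libmp3lame", "-b:a", "192k",
                "lecture_cbr.mp3"], check=True)

# Variable bit rate: the encoder spends bits where the audio
# needs them; -q:a 2 targets roughly 170-210 kbps on LAME's scale.
subprocess.run(["ffmpeg", "-i", "lecture.wav",
                "-codec:a", "libmp3lame", "-q:a", "2",
                "lecture_vbr.mp3"], check=True)
```

The tradeoff in a sentence: CBR gives predictable file sizes (handy for storage quotas), while VBR generally gives better quality per byte.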

The role for ECIT will be to provide students and faculty with the tools they need to convert audio or video files into the iTunes U supported format(s). Since our brief is to teach people how to use the technology for themselves, we need something that is not too difficult: preferably with a nice interface. VisualHub is a good example of this. A downside of the product is that it is Mac only. But it is relatively affordable at (as of today) $23.32 US. I haven't found a good example of Windows-based, easy transcoding software. Jim Kruse has suggested that MediaWorks might provide this. This software is nice because it is dual-platform. I haven't sat down to fiddle with it, however, so I can't speak to its abilities yet.

But since we need to be able to work with any format that people bring in, we need to have some backup software that can handle anything we throw at it. So far, I think I've identified two options, both of which are free and both of which, unfortunately, are Windows only. My personal vote is for MediaCoder. Its interface is nothing to write home about, and you can get lost in the options. However, it has a number of "Extensions" built into it that offer more or less one-step transcoding of files into formats suitable for a variety of different devices. What's more, you can design your own Extensions very easily (although sorting through all the options is daunting). My mp3 player has a built-in mic, and I use MediaCoder to transcode the WAV files into manageable mp3s.

Another option, with an even worse interface, is Super. I've not had much experience with it, but it also appears to be a powerful tool for getting things from one format to another.

I'd give MediaCoder an 85 and Super a 93 on the User Learning Curve (ULC), where 0 is my grandmother and 100 is the evil spawn of Sergey Brin and Larry Page. They are not easy to use or learn. However, I think their advantage is that they could be distributed for free to the entire Emory Community via Software Express or the EOL CD. And if my dreams of guerrilla recording are to come to fruition across the campus, then we'll want everyone to have the tools in her/his room to get the materials uploaded to iTunes U as soon as possible.

If you're interested in seeing what other software I'm finding for the purposes of transcoding, check my list of ECIT+transcode bookmarks at del.icio.us.

Camtasia

I've already had a fair amount of experience with iMovie HD and iDVD, given the work that I have done recently for Emory's Writing Center (although I still haven't gotten to the bottom of the awful 34506 error). So the first piece of software that I've tackled from my list is Camtasia. Camtasia essentially allows you to record everything you do on your computer's screen. You can then use the tools within the software to edit the recording, to add zooming, highlights, and other effects (noticeable feature lack: star wipes). You can also add voice narration, captions, and quizzes. It's more or less intuitive when you sit down with it (although the tutorial videos DID teach me a few new tricks). You can use the software to create a tutorial for different tasks or you can use it to turn a PowerPoint presentation into a self-contained movie (with narration, again).

To learn something about the software, I decided to film myself building some of the Google Earth tools I used in my talk last week. I did this ex post facto, but it cemented some of what I had already learned about Google Earth. I then produced a movie and threw in as many of the tools as I could. You can watch it here.

I learned a couple of things in doing this. First, there is a limit to how long you can record within Camtasia. And if you don't stop before this time elapses, the recording will stop and you will lose everything you've done. This happened when I filmed a 15-minute or so segment of playing with Google Earth. There's a chance that I missed something, but I've yet to see it made explicit how long you can record or what determines the limit. Second, even though I tried to go through the motions of building my maps rather slowly, it was still too fast when I sat down to record the narration for the video. It seems, for this reason, that if you really want to create a tutorial, you should aim for step-by-step instructions and perhaps create a different movie for each one. It's much easier to work with five 30-second segments than with a 2.5-minute segment. The whole thing would have gone more smoothly as well if I'd had a script planned out ahead of time for both my motion and the narration; that would have let me time things better. However, the newest version of the software, which ECIT will get soon, has additional features that help you get around moments when you find you are talking longer than you thought you would in a particular segment.

What's the pedagogical application for Camtasia? Well, you can create tutorials for your class if you need them to learn how to use software and would like to post refresher videos online for them to review. You could use it for review materials for your class (slapping together some PowerPoint presentations or a new one) and even include assessments (quizzes), so students can see if they really know the information. Other uses? I'm going to have to keep thinking about it. But the results you can get for either of these two goals are really impressive.

P.S. Now that I have finally finished producing the project, it looks like not all of the elements synced up in quite the right way. The zoom and the video seem to be in sync. However, the captions are ahead of the narration, and the yellow highlighted box (what Camtasia calls a "call out") is late from where it should be in the video. The call out is, however, synced with the vocals as I compare the project in Camtasia and in the flash file that I've posted here. The quiz seems to show up at the right time in the narration and in the video, which seems strange.

In any case, watching it right now gives me a slightly queasy feeling. It's all off slightly. Wayne and I tried exporting it to .wmv instead of flash (.flv), but that didn't seem to fix anything. (In fact, there was no sound and the frame rates were badly off.) So I'll continue to fiddle with this. It's obviously not a good tool if you can't get all its elements to line up with one another.

Mapping My Work

Last week I had the chance to sit down for several hours with the library's new GIS librarian, Michael Page, to learn the inner secrets of Google Earth. This was all related to a talk I was giving for the Emory Psychoanalytic Studies Program on the last chapter of my dissertation. To sum up very briefly, I did a reading of William Gibson's novel Pattern Recognition that shows how the novel depicts both trauma and technology in terms of speed. I wanted to make an argument about how fast communication technology works to enable information to move from one character to another, but in order to do so I needed to know how far the characters traveled within the environs of London, specifically Camden Town. (On my pitiful stipend, I couldn't afford the research trip to London.) I at first turned to Google Maps, but then tried Google Earth to get a better idea of what the area looked like. I wrote the argument into my chapter and thought I was done.

When the talk suddenly loomed on my horizon, I wondered if I could bring the pictures of Camden Town in with me to help make the argument more convincing. People wouldn't have to take my word for how little time elapses in the particular passage I was concerned with, but could see it. One thing led to another, spilling over into some other research I'd done for the chapter in connection with New York City (where, again, I've never been), and I decided to map four different parts of the novel: Camden Town, Tokyo, and New York on 9/11 and on 9/19/01. Michael helped me learn how to use paths, fly-throughs, and other tools to dress up my maps. You can see the results here. Just download the .kmz files and open them in your own Google Earth application.
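Incidentally, you don't have to build these files by clicking around in Google Earth: a .kmz is just a zipped KML (XML) file. Here's a minimal Python sketch that writes a one-path KMZ (the coordinates are rough placeholders near Camden Town, not the ones from my talk):

```python
import zipfile

# Minimal KML: a single named path. Coordinates are
# lon,lat,altitude triples (placeholders here).
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
  <Document>
    <Placemark>
      <name>Camden Town walk</name>
      <LineString>
        <tessellate>1</tessellate>
        <coordinates>
          -0.1426,51.5390,0
          -0.1390,51.5412,0
          -0.1357,51.5426,0
        </coordinates>
      </LineString>
    </Placemark>
  </Document>
</kml>
"""

# A .kmz is simply a zip archive whose main entry is doc.kml.
with zipfile.ZipFile("camden_walk.kmz", "w", zipfile.ZIP_DEFLATED) as kmz:
    kmz.writestr("doc.kml", kml)
```

Open the resulting file in Google Earth and it should fly you straight to the path.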

I was very pleased with the results of the talk. At first I thought the technology would draw people's attention away from what I was saying. ("Ooh, pretty 3D renderings!") In actuality, I think it helped me to make my arguments effectively and also helped those who were unfamiliar with the novel and its plot get a better, more concrete, if you will, sense of what happened in it. I was no longer simply reading a paper at people; instead they were watching the settings of the novel come to life. A prime example of show, don't tell. Finally, I think interacting with the computer forced me to go off script and to talk more like a human, which is always a plus when doing a presentation like this. For this reason, I don't think I would ever turn these presentations into movies, although Google Earth Pro offers that functionality. It seems to me that it was important for the audience to see me interacting with the software--even changing the settings--so they got a sense of the tactility, for lack of a better word, of the environment I was using. And hopefully it gave them a sense that this was something they could do themselves.

I'm really interested in continuing to think about how our reading of novels would differ if we used mapping techniques like this. I'm going to lay the blame squarely on Franco Moretti's shoulders.

Getting My Feet Wet

So. The suggestion for this blog came from Wayne, who thought it might be an easy way to keep track of everything I learn and experiment with this year in ECIT. I've been tasked with the rather harsh job of beginning my time here by learning the following software:

1. iMovie
2. iDVD
3. Acrobat Pro
4. Dreamweaver
5. Audacity
6. Wimba (via Blackboard)
7. Camtasia
8. RealProducer
9. Interwrite PRS

I know. It's rough. I for one welcome my new ECIT overlords.