Hello, Digital Sea Monsters enthusiasts! My name is Tony, and I’m one of the programmers working on the Marine Megafauna iPad app. I’m writing this post to give anyone who’s interested a look at where the app currently stands and at the development process in general. Last week, my fellow team members (Adam, James, and Sophia) demoed an early prototype of the application for Dave and Nicole.
The app is meant to have a public “Megafauna of the Day” feature that anyone can access; this feature was the main focus of our prototype. Here are some screenshots of the Megafauna of the Day feature running on an actual iPad; these are not just mockups:
[image title="Interface 1" size="medium" align="left" icon="zoom" lightbox="true" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/mod1.png[/image] [image title="Interface 2" size="medium" align="right" icon="zoom" lightbox="true" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/mod2.png[/image]
Users can choose from the photos (and eventually videos and audio clips) in the media browser at the bottom left. Whichever thumbnail is tapped gets loaded into the main image space just above the media browser. If users can tear their eyes away from the pretty pictures, they can read all about the featured megafauna in the scrollable body of text on the right. In the second screenshot you can see the popover menu that appears when the user presses the button at the top right; megafauna from the past few days are accessible from this menu.
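For the programming-curious, the thumbnail-to-main-image behavior can be sketched roughly like this. This is just an illustration in modern Swift (the app itself is not written this way, and the names `mainImageView` and `mediaItems` are made up for the example):

```swift
import UIKit

// Hypothetical sketch of the media-browser interaction: tapping a
// thumbnail in the bottom-left browser loads that item into the
// main image space above it.
class MediaBrowserController: NSObject, UICollectionViewDelegate {
    let mainImageView: UIImageView   // the large image space
    var mediaItems: [UIImage] = []   // photos (eventually videos and audio too)

    init(mainImageView: UIImageView) {
        self.mainImageView = mainImageView
    }

    // Called when the user taps a thumbnail in the browser.
    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        mainImageView.image = mediaItems[indexPath.item]
    }
}
```

The real idea is the same as the sketch: the media browser is just a list of thumbnails, and selection swaps what the big image view displays.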
The other half of the app is the private, course-centered side. It is protected by netID authentication, and students enrolled in the course can browse and view each week’s readings and lectures.
Here are some screenshots of the Readings feature, also actually running on an iPad:
[image title="Readings Interface 1" size="medium" align="left" icon="zoom" lightbox="true" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/reading1.png[/image] [image title="Readings Interface 2" size="medium" align="right" icon="zoom" lightbox="true" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/reading2.png[/image]
The user can pick any of the PDF readings from the browser shown in the first screenshot. After selecting a particular reading, the user can type notes, highlight, or even scribble on the page. These annotations are saved and restored whenever the user opens that reading again.
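A rough sketch of how per-reading annotations might be persisted looks like the following. This is illustrative only, in modern Swift; the `Annotation` type and the file layout are assumptions for the example, not our actual implementation:

```swift
import Foundation

// Hypothetical annotation record: a typed note, a highlight, or a
// freehand scribble attached to a page of a reading.
struct Annotation: Codable {
    let page: Int
    let kind: String     // "note", "highlight", or "scribble"
    let contents: String // note text, or encoded drawing data
}

// Save annotations to a per-reading file in the app's Documents
// directory, and load them back when the reading is reopened.
struct AnnotationStore {
    func fileURL(forReading readingID: String) -> URL {
        let docs = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
        return docs.appendingPathComponent("\(readingID).annotations.json")
    }

    func save(_ annotations: [Annotation], forReading readingID: String) throws {
        let data = try JSONEncoder().encode(annotations)
        try data.write(to: fileURL(forReading: readingID))
    }

    func load(forReading readingID: String) -> [Annotation] {
        guard let data = try? Data(contentsOf: fileURL(forReading: readingID)),
              let annotations = try? JSONDecoder().decode([Annotation].self,
                                                          from: data)
        else { return [] }
        return annotations
    }
}
```

The key design point is that annotations live on the device, keyed by reading, so they survive app restarts without any server involvement.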
None of the images, text, or PDFs shown above were pre-stored on the iPad. All of them were fetched over the network from a test database we set up!
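To give a flavor of this no-local-content approach, here is roughly what fetching an image from a content server looks like. This is a sketch in modern Swift with a deliberately fake URL; our actual endpoints and code differ:

```swift
import UIKit

// Hypothetical sketch: fetch a megafauna photo from the content
// server instead of bundling it with the app.
func fetchImage(from url: URL,
                completion: @escaping (UIImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else {
            completion(nil)   // network failure: caller shows a placeholder
            return
        }
        // Hop back to the main thread before touching the UI.
        DispatchQueue.main.async {
            completion(UIImage(data: data))
        }
    }.resume()
}

// Usage (the URL here is illustrative, not a real endpoint):
// fetchImage(from: URL(string: "https://example.edu/content/photo.png")!) { image in
//     mainImageView.image = image
// }
```

Keeping everything server-side means we can update the Megafauna of the Day, readings, and lectures without ever pushing a new version of the app.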
Here are a few of the things we are currently working on:
- setting up a database for all the content on a server Dave has provided for us
- user interface improvements that Dave and Nicole both suggested
- various performance and memory-usage optimizations
- and of course, the lectures!
Here are some mockups of how we want the lectures to look. The lecture viewing features have not been implemented yet; these are simply concept images:
[image title="Lesson Interface 1" size="medium" align="right" icon="zoom" lightbox="true" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/lesson2.jpg[/image]
[image title="Lesson Interface 2" size="medium" align="left" icon="zoom" lightbox="true" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/lesson3.jpg[/image]
We should have an updated prototype with freshly implemented features to demo for Dave and Nicole in a week or two!
[image title="Lesson Interface 3" align="center" icon="zoom" lightbox="true" size="large" autoHeight="true"]http://superpod.ml.duke.edu/digital/files/2011/02/lesson4.jpg[/image]