augmenteamed

ubiquitous computing & pervasive interaction design

System Concept & Description

Our system consists of three main parts: cloud-based data collation software, an advanced documentation system, and multiple tablet-sized touch interfaces available throughout the space.

All Hands Active would be fitted with many small cameras and microphones integrated into the space's infrastructure. Beyond capturing photos and audio, the cameras would feed facial recognition software to identify individual members. Chips embedded in the tools and materials would be read by a network of sensors throughout the space, letting the system determine which tools and materials each user was working with at any point. The system would also record location information at a much higher fidelity than today's indoor positioning allows.
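To make this concrete, each sensed moment might be collated into a small event record. Nothing like this was implemented for our demo, and every field name and value in the sketch below is invented for illustration:

```javascript
// Hypothetical sketch only -- the sensing layer was never built.
// One plausible shape for an event the cameras and tool sensors
// might report to the collation software; all names are invented.
const toolPickupEvent = {
  type: "tool-pickup",
  timestamp: Date.now(),
  memberId: "sylvia-0042",            // from facial recognition
  toolId: "soldering-iron-07",        // from the chip in the tool
  location: { zone: "electronics-bench", x: 3.2, y: 1.7 }, // meters
  photoRef: "photos/frame-000183.jpg" // frame captured at the event
};

console.log(JSON.stringify(toolPickupEvent, null, 2));
```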

All of the data gathered by our advanced documentation system would be sent to cloud-based collation software, which would interpret the data and add it to member profiles and project profiles. These profiles would be linked, so that each member would be associated with the projects they had completed.
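One way to picture this linking is as two collections of profiles that reference each other and are updated whenever new activity is sensed. The sketch below is illustrative only; the identifiers and fields are our assumptions, not a specification of the software:

```javascript
// Hypothetical sketch of the member/project linking; all names invented.
const members = {
  "sylvia-0042": { name: "Sylvia", projects: [] }
};
const projects = {
  "led-cube-01": { title: "LED Cube", members: [], photos: [] }
};

// Each sensed activity updates both sides of the link, so browsing a
// project leads to its makers, and a maker profile leads to their projects.
function recordActivity(memberId, projectId, photoRef) {
  const member = members[memberId];
  const project = projects[projectId];
  if (!member.projects.includes(projectId)) member.projects.push(projectId);
  if (!project.members.includes(memberId)) project.members.push(memberId);
  if (photoRef) project.photos.push(photoRef);
}

recordActivity("sylvia-0042", "led-cube-01", "photos/frame-000183.jpg");
console.log(members["sylvia-0042"], projects["led-cube-01"]);
```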

The documentation system would be controlled by users through touch-screen tablets scattered throughout the space. The tablets would themselves be location-aware, keeping track of where users were in the space and showing them projects associated with nearby areas, and they would serve as the primary interface through which members accessed project and member profiles.
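The area-based browsing could reduce to a simple filter over project profiles by zone. Again, this is only a sketch under assumed names, not a description of built software:

```javascript
// Hypothetical sketch: a tablet listing projects tied to the zone it
// senses it is in. The data and names below are invented.
const shopProjects = [
  { title: "LED Cube",  zone: "electronics-bench", makers: ["Sylvia"] },
  { title: "CNC Stool", zone: "wood-shop",         makers: ["Jeff"] }
];

function projectsNear(zone) {
  return shopProjects.filter(function (p) { return p.zone === zone; });
}

// A tablet at the electronics bench would surface Sylvia's project:
projectsNear("electronics-bench").forEach(function (p) {
  console.log(p.title + " -- " + p.makers.join(", "));
});
```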

All of the system's components would work together to help members document their projects and activities more easily. Once collected, these data could be organized, displayed, and ultimately used to facilitate social interaction between members, meeting all of the identified needs for this space.

Demo Description

For the demo, we wanted to show two users effectively using all the parts of the system, and to show how it would help them with their tasks at All Hands Active.

In our demo, our first user, Sylvia, enters All Hands Active, uses a tablet interface to access the recording part of our system, and sets the system to photograph her automatically as she works on a project. Once she has completed her work, she puts down the tablet and walks away. A second user, Jeff, enters and, after some deliberation about what to work on (a common problem at All Hands Active), uses the tablet interface to find projects associated with different physical areas inside All Hands Active. He settles on Sylvia's project, gets basic information about it via the tablet interface, and also accesses Sylvia's maker profile. Soon after, Sylvia re-enters the scene. Armed with information from her profile, Jeff introduces himself, and the two are quickly able to start working together on a project.

To create this demo, we used two cameras on flexible GorillaPod tripods as non-functional decorative props, an Arduino kit as another non-functional prop representing an in-progress project, and a series of linked prototype screens displayed on two iPads, representing the portions of the interface that Jeff and Sylvia access. We created the prototype screens in Adobe Fireworks, using that application's linking capabilities to join the screens together. We also wrote some custom HTML and JavaScript to make the prototype auto-advance at certain points and to display more smoothly on the iPads, as sketched below. Finally, we photographed one of our demo participants, Sylvia, working at the same work table we use in the demo, so that the system would appear to be taking photos of her in the very space she occupied during the demo.
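The auto-advance behavior amounted to a timed redirect placed in each exported page. The snippet below sketches that kind of script rather than reproducing our exact code; the filename and delay are placeholders:

```javascript
// Sketch only, of the sort embedded in an exported Fireworks page via
// a <script> tag; not our exact code. The next page's filename and
// the 3-second delay are placeholders.
window.setTimeout(function () {
  // After a pause, advance to the next screen in the click-through.
  window.location.href = "next_screen.htm";
}, 3000);
```

The "display more smoothly" piece was plain HTML; sizing each page to the iPad screen (for instance, with a viewport meta tag) is the usual approach, though we omit those details here.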

Discussion

Our demo captured most of the user-facing aspects of the system and covered the ways users would interact with it to accomplish various tasks, such as browsing projects, viewing member profiles, and recording photos and other project information. This was our primary intent. What the demo did not cover were the technical pieces of the recording: we did not build a functioning system to record photos or to link them to online profiles in a cloud-based service, as this was beyond the scope of this class, but we believe we sufficiently demonstrated what building such a system would require. Another aspect our demo did not simulate was the idea of smart tools and materials, which we hope will be available in 5–10 years but which are not easily accessible now. Our demo functioned well as a simulation of the user experience, though not as a fully functional prototype of our system.