After performing our Formative Study, we refined the scope of our project to augment All Hands Active through three specific ideas, represented in storyboards describing smart photos and smart tagging, smart tools and augmented reality safety glasses, and physical ambient gamification. These storyboards are described in detail in our Formative Study. They displayed specific details of the interactions users would have with each of our systems, and our next step was to select the one we felt best met the criteria we had established during the Formative Study: acceptability, cultural fit, applicability, resonance, and demonstrability.
Our group created a set of experience prototype scenarios based around the idea of smart tagging and smart photos. Our storyboard had described a system that would automatically help users take photos in the All Hands Active space. These photos could then be displayed on individual multi-touch screens, similar in size and shape to a Polaroid photo, that could be hung around the space. The photos would be “smart tagged” with information gathered by the system as the user worked, including which tools and materials were used, how long each step took, who was working on the project, and more. Other users could then access this smartly-tagged information by looking at the miniature photo displays, which could be integrated into the somewhat chaotic environment at All Hands Active and provided a high degree of customizability: they could be placed anywhere and grouped meaningfully (or not) by users as they saw fit.
This idea met all of our criteria: we saw broad applicability across all members, a high degree of acceptability to privacy-concerned members, something that was easily demonstrated, something we thought would resonate with our audience based on the success of our cultural probe in the Formative Study, and something that with its malleable and slightly chaotic nature would be a good cultural fit for All Hands Active.
Speed Dating Matrix & Scripting
In order to test different dimensions of this idea, we created a “speed dating matrix”, as described by Davidoff et al. (2007) in Rapidly Exploring Application Design Through Speed Dating.
One set of dimensions was “Activity”. We wanted to test our system in different situations, so we decided to use some of the activities from our Initial Concepts: instruction via social facilitation, instruction via computer-aided tutorial, and documentation of activities.
Another set of dimensions was the proactivity of the computer system, a concept also drawn from Davidoff et al. (2007) that we thought was relevant here; we wanted to understand the ideal level of proactivity for our system. We decided on three levels: high, medium, and low. In highly proactive situations, the computer would do everything it could automatically, with minimal user input. In low-proactivity situations, the computer would still provide advanced capabilities, such as knowing which tools were used on a project, but would require user initiation before acting.
The final set of dimensions revolved around the physicality of the system. Our research suggested that a series of small displays would be useful and would be a good cultural fit for a makerspace, but we wanted to test this theory, and therefore included them in our storyboards. We decided on having one set of scenes include only a large, centralized display such as might be available today, another set of scenes include only the miniature photo-sized displays, and a third set including a mix of both screen sizes, working in tandem.
We drafted 27 brief scene descriptions on our matrix, but knew that because of time constraints and to avoid exhaustion on the part of our participants, we could only perform a subset of these scenes. We decided on nine scenes that we felt represented the gamut of all the ideas present in the matrix. These ideas are highlighted in the matrix above.
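The shape of the matrix can be sketched as a simple enumeration. This is a hypothetical illustration: the dimension labels are taken from the text above, and the subset-selection rule shown is only one systematic way to cover every level of every dimension, not the judgment-based selection we actually used.

```python
from itertools import product

# The three dimensions of our speed dating matrix, as described above.
activities = ["social facilitation", "computer tutorial", "documentation"]
proactivity = ["high", "medium", "low"]
physicality = ["large display", "small displays", "mixed"]

# Every cell of the matrix is one candidate scene: 3 * 3 * 3 = 27.
scenes = list(product(activities, proactivity, physicality))
print(len(scenes))  # 27

# One systematic way to pick nine cells so that every pairing of
# activity and proactivity appears exactly once and every physicality
# level appears three times (a Latin-square-style selection).
subset = [
    (activities[i], proactivity[j], physicality[(i + j) % 3])
    for i in range(3)
    for j in range(3)
]
print(len(subset))  # 9
```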
Experience Prototyping Enactments
Below are summaries of the nine scenes we wrote:
- User works on a project at a workbench and the computer system automatically takes photos and tags them with relevant information. The photos are shown on the large display where users can review them.
- User works on a project at a workbench and must prompt the computer system to take photos as they work. The photos are shown on the large display, sans tagging data, and the participant is asked what information, if any, they would take the time to tag the photos with.
- User enters the space and consults the large display. The computer knows the user is working on an Arduino project and suggests an expert member who can help them with the project. The expert member is notified and the user is given a map to their location.
- User enters the space and wants to find a new project to work on. The user browses an array of project profiles and selects one, but finds that there is no tagging data present.
- User works on a project at a workbench and the computer system automatically takes photos and tags them with relevant information. The photos are put onto a small photo display, which is printed and given to the user. The user then moves to a different workbench, where more photos are taken. The user is given the option to hang up the photo displays, keep them, or recycle them.
- User works on a project at a workbench and must prompt the computer system to take photos as they work. If prompted, the computer provides small photo displays, which can then be tagged by the user if desired.
- User enters the space and approaches the 3D printer, which has several small photo displays hanging around it. Some of these picture members, along with their skill levels in different disciplines. The user selects one and is then given a map to that member within the space, as well as additional information about them.
- User enters the space and approaches the 3D printer, which has several small photo displays hanging around it. As the user approaches, the computer discerns the user’s skill level and highlights a few skill-appropriate projects. The user can then interact with a computer tutorial on the small display.
- User enters the space and approaches the large display, and brings up a directory of members, which shows their skill levels and number of projects completed in each discipline. The user picks one of the expert members, and then a number of small photo displays that are scattered through the space highlight. The system informs the user that pictures of the expert member and his projects have been highlighted, and the user can now contact the expert member, or can browse the highlighted photos.
To perform our enactments, we created a low-fidelity facsimile of the All Hands Active space inside a graduate student meeting room at North Quad at the University of Michigan. We used the large whiteboard inside the room to draw shelves, lockers, and a 3D printer. We pushed tables underneath the drawings of shelves to give the idea of workbenches. We put pens, tape, paper, pencils, markers, and Arduino parts on the workbenches to simulate the chaotic nature of the work spaces at All Hands Active.
To simulate the large display screens, we drew individual screens on pieces of butcher paper that were large enough to be taped in front of the 50” television screen in the meeting room, and changed sheets when necessary in the script (such as through user interaction). When the screen was not present, in scenes designed to test the effectiveness of the many small displays, we placed a blank piece of foam-core board in front of the screen to simulate it being “off” or otherwise unavailable.
To simulate the many small photo displays, we cut out several dozen small rectangular pieces of foam-core board of approximately the correct size and drew the necessary screen information on them. We wanted to simulate the idea that many more of these would be present in the space than the few the user would be interacting with, so we included many purely decorative screens that looked informative but would never be interacted with by the participants.
At some moments in our script, we needed to highlight specific small photo displays; in these instances we circled them with a colored whiteboard marker to make them stand out from the other, more decorative photo displays.
We began with a short interview to gauge participants’ familiarity with maker spaces and their feelings about photo sharing. After every two scenes, and after the final scene, we asked the participants a few brief questions centering on their feelings about the difference between the two scenes they had just participated in. After Scenes 5 and 6, in which we switched from a more familiar large display to the more novel photo displays, we asked a few additional questions about the differences between using the two types of displays. At the end of all the scenes we asked a few more follow-up questions. The full list of our questions is available in our moderator script and questionnaire.
During each session we had one person acting as a moderator, describing the scenes, motivations, and actions, and performing the follow-up interviews. We had one person acting as the “computer” that spoke with the participants when necessary, “highlighted” the small photo displays, and changed the large displays. A third person acted as other members at All Hands Active in the scenes that required this. A fourth person took photos and notes. When a fifth group member was present, one person took notes and the other took photos.
After all of our enactment sessions, our group consolidated our notes and had a meeting in which we drew out themes from our observations, photos, and notes, and documented these themes.
Participants & Logistics
We had five participants. All of them were current members of All Hands Active except for one, who was a member at a different local makerspace and was very familiar with All Hands Active. All were male, and all were 20–40 years of age, which fits the demographics of All Hands Active. Our participants held a range of opinions on photos and photo sharing: some were perfectly fine with tagging and being tagged in photos on existing social networks, while others did not like participating in either. It was important for us to find participants with this range, as we wanted to be sure to test the acceptability of our idea, this being one of our defining criteria.
Our shortest session took slightly under one hour, and the longest took approximately 90 minutes. All the sessions were conducted within one week’s time, and all within the same space, using the same materials and whiteboard drawings.
One of our most important findings was that the desired proactivity of the system was highly situational: all our users preferred different levels in different situations. To organize the rest of the findings, we created four broad categories relating to different parts of our matrix during our final interpretation session: General, Documentation, Large vs. Small Displays, and Learning From People vs. Learning From System.
Context & “Situatedness”
One of the first and most important findings was that users’ preferences for different parts of the matrix depended almost entirely on other contexts. In the setting with the large display, participants universally liked the more proactive system better, but when using the small displays, participants universally liked the less proactive system better.
“I like that technology is out of the way.”
— , Regarding large displays
“It would be nice to have both the computer capture what I did and the flexibility for me to manually change some setting.”
— , Regarding small displays
We also found that one of the aspects of our enactments that all users liked best was the idea of information being situated in a meaningful spatial context. One of the best and most consistent reactions we had was to the idea of small photo displays that were related to 3D printing being located near the 3D printer.
“I have idea what to do, but I don’t have the experience. The computer guide me to another person, show me someone is there who is able to give help. If I am stuck, at least I know someone can help.”
— , While near the 3D printer
Users also all responded well to our final scenario, in which they were able to browse a list of user profiles and then see information about that user situated in a physical context within the space, creating almost a “physical profile” for that member.
“I like the combination of having both big screens and scattered small screens…but moving to the smaller displays ‘felt like lurking’—I felt that I knew enough to break the ice already.”
All of these findings helped us decide to add a “situatedness” criterion to our list, as described in the next section, Ideation & Selection.
Re-examining Cultural Fit
Our initial observations suggested that members of All Hands Active appreciated the somewhat chaotic nature of the space and would appreciate a solution that embraced this attribute and integrated well with it. What we found, however, was that our participants saw the current disorganization of the space as a problem, not a feature. Participants were most interested in solutions that helped to organize and order the space, not ones that added to the clutter. After our enactments, we decided to remove “cultural fit” from our list of criteria and to focus on a solution that would be situated but would also add to the organization of the space rather than detract from it.
Reinforcing Utilitarian Focus
During our Formative Study, we discovered that although social interaction is an important part of the All Hands Active experience, the focus of members is on the projects that are built. Social interaction is important, but is mostly seen as a means to an end of working together to build great projects. This was reinforced during our experience enactments. Our users liked utilitarian scenarios better than socially-focused scenarios, and in social scenarios, they focused on the details of how to make the social interactions more useful.
“There are a lot of helpful people in AHA but I may not know who they are or what kind of projects they are currently working on.”
“I like that it guides me to people who are experienced and can tell me what to do next.”
Privacy Only Slightly Important
While some of the users expressed caution about tagging and being tagged in photos online, and while we had heard in our Formative Study that some members might be wary of being recorded, we discovered that, at least in the context of our experience walkthroughs, these concerns did not materialize. We asked all of our participants if there was ever a moment where our system went too far, did too much, did anything that felt “creepy,” or crossed any sort of privacy line. None of our participants indicated that they had felt this at any point, despite us showing systems that photographed them and often automatically tagged them, in addition to providing locative data about them to other members. We thought after our Formative Study that we would have to be exceptionally careful about privacy, but our experience prototyping did not support this.
“I would hang up the photos I liked at the end of the day to share with the public…they would be tags and notes for public use.”
“It would be really beneficial for us to have photos and to have timestamps on them.”
One thing we did discover was that although all of our users accepted the general idea of other members being able to consult them for help on projects, they all indicated a desire to be able to set a “status” of some sort that might prevent them from being consulted if they indicated they were busy or otherwise unavailable. We were able to integrate this idea into our criteria of customizability, as described below.
“Most of the people in AHA are willing to give help, but it would be better if the system knew when the users don’t want to be disturbed.”
Voice Activated Computer is Cool, but Potentially Disruptive
For the purposes of our experience prototype, we used a computer system whose primary method of interaction was speech. All users liked this idea, but several also worried that with several members in the space, voice interaction could quickly become overwhelming and disruptive. Based on this, our tablet interface will include on-screen controls that do not require voice activation.
“I like that asks for my permission at first and then to have it in the background.”
Documentation is Useful
We hypothesized that the automatic documentation aspects of our system would resonate with the target population, and the experience enactments support this. All participants felt that recording photos and project information was both useful and helpful. We will continue to provide advanced and thorough documentation features in future iterations of our prototype.
“I like this! We always want to do something like automatically report what is going on here.”
Utilitarian Tagging is Most Desirable
While users did often indicate that they would tag themselves in photos that were taken, they were most interested in utilitarian project information, such as the step-by-step flow of a project and the tools and materials used. Based on this information, we decided that for the next iteration of our prototype we would have the system proactively tag utilitarian information, but leave other, more social- or author-focused information up to the user to tag themselves if desired.
“Having a tagging function is good—it’s a reflective practice. For the tags, I would like to add timestamps and what tool I am using.”
Documentation Should Be Flexible
While we wanted to discover whether users liked proactive or less proactive documentation, what we discovered was that it varied based on almost every other part of our matrix. In some situations users wanted automatic, proactive documentation. In others they wanted less proactive documentation that only activated when they requested it. This is valuable information that we used to inform our new criteria of customizability, and will use when designing our further prototypes.
“I like it when the computer prompted me to take a photo—this can help record important moments.”
Large vs. Small Displays
Information Design More Important than Form Factor
One general finding was that participants cared less about the specific form factor of the display than about the type of information displayed. For example, users liked the arrays, lists, and galleries shown on the large displays, but did not say they enjoyed the large display because it was larger and centrally located, only that they liked the information provided. We used this finding to decide on a tablet-sized form factor that could conveniently display both the gallery-type data we provided on the large displays and the individual-object-type data we provided on the small displays.
“I’d be interested in knowing more about what tools and materials are useful and where to find them.”
Small, Situated Displays are Tools, Personal Objects
The participants expressed that smaller displays felt more like personal tools, especially when providing data specific to their physical location within All Hands Active. The participants liked that they could carry the small displays around with them. They also felt less worried about looking at personal information about other members or other projects on small displays, as these felt more personal. Users also liked the idea of a map appearing on a small display that they could carry with them, but not on a large display. We used this information to inform our decision to switch to a tablet-sized form factor that maintains the benefits of the small displays while relaxing the size constraints on what can be displayed.
“Manual objects felt personal and private.”
Users Like “Combinable” Displays
One small, futuristic innovation included in our scenarios was the ability to “combine” displays by putting them next to each other, which would turn them into one larger display. All users enjoyed this idea, so we will attempt to include it in future prototypes.
“The grouping feature enables me to combine projects or ideas together or watch them side by side. Grouping feature provides flexibility.”
Decoration Not Important
Fitting with our findings about the utilitarian nature of the space, users were not very excited about the decorative potential of the small displays. Users liked the displays that showed project photos and gave useful information about members, but did not show much enthusiasm towards more social photos.
Learning from People vs. Learning from System
We wanted to test whether members preferred computer-aided tutorials or socially facilitated instruction. We found that our users enjoyed both, but focused far more on social facilitation, and thought it would usually be more worthwhile to ask a member about their project rather than get a computer-aided tutorial for that same project. We will be able to use this information when designing the specific interactions of future prototypes.
“I like the computer to guide me to a person who can teach me rather than the computer walk me through how to use it.”
Given the time constraints, we spent less time on ideation during this milestone, instead focusing on synthesizing our study results to further refine our existing concept. We started by generating a revised list of criteria based on our study results:
- Customizability: Depending on the project or situation in which a member is using our system, they may wish for the system to perform in different ways, for example automatically recording documents and applying tags, or only recording after being cued. A successful system will be customizable to the user’s needs.
- Applicability: Given that All Hands Active is used for everything from knitting to painting to 3D printing to programming to neuroscience kits, we need to make sure that any solution we propose will have some degree of utility to all members, not just a specific group, and will help users build their own projects.
- Resonance: The members of All Hands Active are a unique and, in some ways, particular group with a broad spectrum of user types. Social learning and social education are an important part of the group, and we need to come up with an idea that excites the members and reinforces these activities.
- Demonstrability: Because buy-in from the space is so important, we need to design a prototype that convincingly speaks to our audience in a design language they can understand. A prototype that involves too much hand-waving and “magical” actions (nanobots, etc.) may turn off our audience.
- Situatedness: Our system needs to provide contextual information to users related to where they are within the space, what has been done before in that location, who the user is and what their past history is, and what type of project they are working on. The system needs to integrate equally well with different types of projects.
After defining our criteria, we had one group brainstorming session in which we individually developed ideas for refining the concept, presented our ideas to the group, discussed them, evaluated the pros and cons, and then decided on our final direction.
Our revised system proposal includes two aspects: documentation and access. It maintains many of the existing features of the system we tested, and as in the formative study, we hope that it will be broadly applicable to all types of activities, and will be reasonably demonstrable. We also have incorporated many of the things that users liked—and eliminated things they did not like—from our experience prototyping sessions in the hope of increasing the resonance of our system with our users.
The documentation aspect would be an advanced set of recording equipment. Many small location-aware cameras would need to be installed throughout the space and tied, via a network connection, to the tablet interface and a cloud service that would host the documented information. The system would also require an advanced set of integrated sensors to detect when tools and materials were moved and thus used. These sensors would need to be tied to the rest of the system so that they could activate when a user wanted to start recording, detect whether they were near the user, and detect whether they had been moved and were therefore in use.
The access aspect would consist of many tablet-sized screens scattered throughout All Hands Active. There would be several tablet docks built into various parts of the space, both to charge the tablets’ batteries and to encourage users to replace the tablets when finished with them to maintain the neatness of the space. These tablet screens would have a custom interface on them that could display galleries of both users and projects. User galleries would include data about the projects they have worked on and about their areas of interest and expertise. Project galleries would include data about the tools and materials used, and possibly who worked on them, if that information was provided.
Importantly, these tablets would need to be micro-aware of location, and the information displayed would need to change based on their location within All Hands Active. If a user picked up any tablet and carried it over to a 3D printer, for example, the tablet should show 3D printing projects. If the same tablet is moved to a workbench, projects should show up that were completed near that workbench. This will help fulfill the “situatedness” criteria described above.
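As a rough sketch of this location-aware behavior (all names here, including the `Project` record and the zone labels, are our own illustration rather than part of the proposed implementation), the tablet's content selection could be as simple as filtering documented projects by the tablet's sensed zone:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """A documented project, tagged with where it was recorded."""
    name: str
    zone: str                 # e.g. "3d-printer", "workbench-2"
    tools: list = field(default_factory=list)

def gallery_for(tablet_zone, projects):
    """Return only the projects documented in the tablet's current zone."""
    return [p for p in projects if p.zone == tablet_zone]

projects = [
    Project("LED party shirt", "3d-printer", ["Arduino", "soldering iron"]),
    Project("Leather wallet", "workbench-2", ["awl", "needle"]),
]

# Carrying the tablet over to the 3D printer re-filters the gallery.
print([p.name for p in gallery_for("3d-printer", projects)])  # ['LED party shirt']
```

Moving the same tablet to `"workbench-2"` would instead surface the projects documented at that workbench, which is the essence of the “situatedness” criterion.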
The tablets will also be the main interface for activating the recording system, with on-screen controls that can be set to take individual photos, or to take photos automatically, or to take video. These controls would also let users turn on and off sensor recording features about tools and materials used. This interface, if implemented correctly, will help fulfill the customizability criteria described above.
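A minimal sketch of the control state these on-screen controls would manage follows; the class and field names are hypothetical illustrations assuming only the modes described above, not a specification of the real interface.

```python
from enum import Enum

class CaptureMode(Enum):
    MANUAL_PHOTO = "manual"   # user triggers each photo
    AUTO_PHOTO = "auto"       # system photographs proactively
    VIDEO = "video"           # continuous video capture

class RecordingControls:
    """Per-session settings a tablet's recording screen might expose."""

    def __init__(self):
        # Default to the least proactive behavior, matching our finding
        # that users want to initiate proactivity themselves.
        self.mode = CaptureMode.MANUAL_PHOTO
        self.tool_sensing = False  # automatic tool/material tagging off

    def set_mode(self, mode):
        self.mode = mode

    def toggle_tool_sensing(self):
        self.tool_sensing = not self.tool_sensing

controls = RecordingControls()
controls.set_mode(CaptureMode.AUTO_PHOTO)  # user opts into proactive capture
controls.toggle_tool_sensing()             # ...and automatic utilitarian tags
```

Defaulting to manual capture with opt-in automation reflects the customizability criterion: proactivity is something the user turns on, not something imposed.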
Our demo will demonstrate the system described in detail above. The main pieces of our system missing from existing technology are small, cheap sensors that can be embedded into everything from tools to circuitry materials, hyper-advanced location sensors, and a software or Web service that would tie all of these things together into a usable package. The other aspects, such as tablets and small cameras, exist today. Within 10 years or so, we hope our system would be easy to create with cheaper tablets and cameras and a better, more integrated system of sensors.
Refined Concept Storyboards
We have created a new storyboard that shows what we propose as the basic flow of our demo.
Our demo involves two users. One, Julio, works mainly on modifying clothes by doing leatherwork or silkscreening. The other, Vicky, is an expert at using Arduino. The story starts with Vicky using one of the tablets to create a modification of a past project that she finds in the project gallery. Once she starts deviating from the provided tutorial on the tablet, she starts the automatic recording feature of the system. Once Vicky finishes her project, she returns the tablet to its dock.
Julio is interested in integrating LED lights into a shirt for a party, but has little Arduino experience. He brings a tablet over to the workbench where Vicky was recently working and sees her project, which is very similar to what he wants to do.
At the end of this milestone, our group is ready to implement our demo plan showing a complete and detailed idea for a system combining situated tablet displays and advanced documentation mechanisms. This system is designed to help the members of All Hands Active in several important ways. By using the experience prototyping process, our group was able to test the effects of different system proactivity levels, test the usefulness of the system in different activity situations, and test the effectiveness of different form factors for the system. We settled on a system that combines ideas from all across the “speed dating” matrix. From the large display we take the idea of galleries of projects and members. From the small display we take the idea of “situatedness” and location sensitivity. By testing proactivity we settled on a system that allows the user to initiate different levels of proactivity as their needs change, and discovered that users are fine with the tagging of information about tools and materials, but are less interested in tagging of personal information. We also confirmed the power of sensing context and user intention, and integrated these ideas into our new prototype.
As we move forward, the main question that we leave unanswered is the usability of our new idea. While we did test many variations, and came up with an amalgamation of all our ideas to demo, we did not actually get the chance to test our new, tablet-based system. The documentation aspect of our system is unchanged, so that aspect should continue to be successful, but it is possible that tablets would not be as successful as either the large displays or the small displays that we did test.