Much, if not all, of the focus on augmented or mixed reality involves a screen: a tablet, a phone, or a screen inside a headset. So let’s talk about screenless augmented reality.
As I wrote about in part 1 of this post, you could have a bracelet that sends you messages through text or light. You might argue that this is just “messaging” and not augmented reality. But it isn’t messaging about IRL reality; it’s a messaging layer on top of reality. It adds instructions and rules that augment your reality. (And your augmented parts might be different from someone else’s.)
Audio-Directed Instruction
Two people sit at a table and have a conversation that is partially prompted or fully orchestrated by a one-eared headset. It’s a conversation they are having, but it’s not their own. They are instructed to make gestures and show certain facial expressions. They are instructed to pick up objects on the table. All the while, they hear two actors’ voices saying the lines they are acting out. There is no screen. The augmentation comes from the earpiece.
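To make the mechanics concrete, here’s a minimal sketch of how such a cue track might be driven. The cue format, offsets, and instructions are placeholders I invented for illustration, not a real system; all it assumes is a shared start time:

```python
import time

# A hypothetical cue script: each cue has an offset (seconds from the
# start), the participant it targets, and the line or stage direction
# to whisper into that participant's earpiece.
CUE_SCRIPT = [
    (0.0,  "A", 'Say: "I never liked this table."'),
    (4.0,  "B", "Gesture: shrug, then pick up the spoon."),
    (8.0,  "A", "Expression: look surprised."),
    (12.0, "B", 'Say: "Neither did I."'),
]

def run_script(cues, participant):
    """Deliver one participant's cues at their scheduled offsets."""
    start = time.monotonic()
    for offset, target, instruction in cues:
        if target != participant:
            continue
        # Wait until this cue's moment on the shared timeline.
        delay = start + offset - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        print(f"[{participant} earpiece] {instruction}")  # stand-in for audio playback

if __name__ == "__main__":
    run_script(CUE_SCRIPT, participant="A")
```

Run the same script on two devices started at the same moment, with participant set to “A” on one and “B” on the other, and the pair act out the scene together without ever seeing a screen.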
Let’s take the example a step further and propose something that might be called “intersecting screenless augmented reality”.
In this example, both people are hearing different things and interpreting the situation completely differently. One person is experiencing a conversation about dogs. The other is experiencing a conversation about death. This idea can play off the fact that expressions and gestures can mean different things in different contexts.
You can take this a step further and create a third intersecting experience. People across the room see the two interacting and making gestures, and they are told yet another story about what is happening. Obviously, writing three stories that everyone finds convincing would be tricky to pull off. But the stories might not have to be starkly different; instead, they could simply leave out certain details. Which is exactly how things happen in the real world, where things are misheard, missed, and misinterpreted all the time. I think a medium like this offers a lot of artistic territory for showing perspective and building empathy.
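One way to structure the intersecting version is a single cue timeline with a private track per listener. The cue contents below are invented placeholders, but the shape shows the trick: shared timestamps keep the gestures synchronized while the stories diverge.

```python
# Three hypothetical tracks keyed to the same cue times: the two seated
# participants hear different stories, and onlookers hear a third.
SHARED_CUES = [
    {
        "offset": 0.0,
        "A": 'Say: "It was so sudden."',      # A's conversation is about death
        "B": "Nod slowly, as if agreeing.",   # B's conversation is about dogs
        "onlookers": "They are discussing a secret.",
    },
    {
        "offset": 6.0,
        "A": "Reach across the table.",
        "B": 'Say: "She loved to run."',
        "onlookers": "Watch the hands. One of them is lying.",
    },
]

def track_for(role):
    """Flatten the shared cue list into one listener's private track."""
    return [(cue["offset"], cue[role]) for cue in SHARED_CUES if role in cue]

for role in ("A", "B", "onlookers"):
    print(role, track_for(role))
```

Because every track is derived from the same timestamps, an author can revise one story without desynchronizing the others.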
Beat-Directed Interaction
One subset of Audio-Directed Instruction is beat-directed interaction. This method can capitalize on the pull and rewards of synchronization. It’s not quite music and dancing, but it’s more structured than simply giving instructions. Individuals can be put on the same beat, either through a metronome-type device or a countdown, so that they all “hit” on the same synchronized beat. What they do could be a gesture, a word or phrase, or a full-body movement, like walking somewhere in a room or clapping.
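Here’s a sketch of the synchronization itself. It assumes every earpiece can read the same clock, which is a real assumption: over a network you’d have to correct for clock drift (e.g., with NTP). But the core idea is just rounding up to a shared beat boundary:

```python
import time

BPM = 90
BEAT = 60.0 / BPM  # seconds per beat

def next_shared_beat(now, epoch=0.0):
    """Return the time of the next beat boundary, counting beats
    from an epoch that every device agrees on."""
    beats_so_far = int((now - epoch) // BEAT)
    return epoch + (beats_so_far + 1) * BEAT

# Any device that runs this against the same clock and epoch fires
# its cue at the same instant, so everyone "hits" together.
now = time.time()
cue_time = next_shared_beat(now)
time.sleep(cue_time - now)
print("CLAP")  # stand-in for the synchronized gesture, word, or movement
```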
In a group, people could be put into subsets in either a static or fluid way. They could be challenged to find their group, or the groups could be established and made clear before each participant is given the cue to participate (where they get the “a ha” moment combined with a synchronization reward and a feeling of belonging).
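A sketch of the grouping, with invented participant names; “static” keeps one assignment for the whole piece, while “fluid” can be re-rolled whenever the groups should churn:

```python
import random

participants = ["ana", "ben", "chen", "dia", "eli", "fay"]

def static_groups(people, n_groups):
    """Fixed subsets: each person keeps one group for the whole piece."""
    return {p: i % n_groups for i, p in enumerate(people)}

def fluid_groups(people, n_groups):
    """Reshuffled subsets: call again whenever the groups should change."""
    return {p: random.randrange(n_groups) for p in people}

groups = static_groups(participants, n_groups=2)
for person, g in groups.items():
    # Offsetting each group by a beat makes the subsets audible: group 0
    # hits on even beats, group 1 on odd beats, and finding "your" people
    # becomes a matter of noticing who moves when you do.
    print(f"{person}: hit on beats where beat_number % 2 == {g}")
```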
Audio-Directed Rules
Rather than treating people as automatons performing a play, you can give them rules and a goal so that they have more agency in their experience.
For example, take a group of people with earpieces. Two of them hear a message to find “their” person using only eye contact, and to stand next to each other once they do. In this way, people aren’t told exactly what to do; they have to work out the communication method the earpiece asks for. Will they just start blinking at people? Will they stare? How will the in-group respond? How will the out-group respond?
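How those rules might be dealt out is simple to sketch; the names and rule text here are invented, and the point is only that the earpiece delivers roles and constraints rather than moves:

```python
import random

participants = ["ana", "ben", "chen", "dia", "eli", "fay"]

# Secretly pick the two seekers; everyone else gets a neutral rule.
seekers = set(random.sample(participants, 2))

SEEKER_RULE = ("Find your person using only eye contact. "
               "When you are sure, go stand beside them.")
OTHER_RULE = "Mingle. Return eye contact from anyone who offers it."

for p in participants:
    message = SEEKER_RULE if p in seekers else OTHER_RULE
    print(f"[{p} earpiece] {message}")  # stand-in for whispered audio
```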
There’s a lot more to explore here; these are just a few ideas.