Daniel Braunstein

Audio Programming | Spatial Audio Research

Digital Anthropology and The Absence of "Place"

This past week, I had the pleasure of hearing Wade Wallerstein, Bay-Area-based Digital Anthropologist and Founder/Director of Silicon Valet, speak at one of my classes. To be honest, I had no concept of what digital anthropology was, and a quick hour-long deep dive into some extremely dense theory and a rapid-fire overview of important observations eventually coalesced into a few important takeaways:

  • Digital interactions are Real. They are reality, experienced through a digital lens.

  • The collective meaning-making we do in the real world, turning “spaces” into “places”, exists just as strongly, yet entirely uniquely, in the digital domain.

Maybe the thing that hasn’t quite sat right with me about all these Zoom calls over the last year is nestled alongside this claim: the potential for “reality” in digital interactions, once homogenized and sterilized, collapses into a poor attempt at human communication.

Touching back on digital anthropology for a moment, I want to tease out some of the ways in which I’ve started to reconsider how I (and all of us) engage online. A whole bunch of terms and concepts have been coined over the last 30 years, and it’s a bit of a highlight reel of things to consider: How does the literal physical interface of the computer affect our interactions? How do we engage with our bodies’ “algorithms” (say, touching your hair when you get nervous) in a mediated space? Can a digital experience, by virtue of being played through some unique combination of hardware, software, and environment, ever be truly the same? Has the seamless fluidity of modern social media platforms totally squashed any sense of “community”?

As a slight counter to the fatalist “all community is dead” take, community does still show up in various ways: Reddit, fandoms, Discord servers, Twitch communities - all somewhat-personalized ways for folks to gather and experience things together. If that sounds interesting, here’s a bit of a deep dive into one particular corner: musicologist George Reid speaks brilliantly in this interview on some of the ways “fandom”, nostalgia, and music overlap in the identity- and community-making of the chiptune scene.

While these are all good questions - what’s the tie-in? At least for me, this seems to fit naturally into the work I’m doing for my thesis at NYU. Digital communication is currently stunted, impersonal, and simply difficult to navigate. I’ve sat through dozens of Zoom meetings, and yet Zoom doesn’t feel like a “Place”. It’s the same window into the same emotionless, awkward-conversation-prone consumption of sound, only for people I’d otherwise be forging lifelong bonds with (and myself) to hit the “Log Out” button exactly at the top of the hour. So what’s the remedy?

This room easily defined my first few years of undergrad. Image from.. 2014?

I don’t have a perfect answer, but maybe a potential attempt. I recently met (via Zoom) with some fellow alumni from my music fraternity back at Michigan, Phi Mu Alpha. We used to run a student cafe during my first few years, but it got axed during a building renovation before we graduated. As we were chatting, it hit me: That is place. It’s almost perfect: the literal space doesn’t even exist anymore, yet, as we all sat there reminiscing on Zoom, memories of “The Lounge” brought us right back into a sense of togetherness that felt like it had been gone for nearly 6 years. I realized: what if the missing ingredient is Place?

So, using my newfound skills in Blender, I set to work. There’s still much to be done, but maybe a virtual hang in a spot so filled with memory will be the extra “oomph” we’ve been missing lately. In a world of sterilized, one-size-fits-all communications and social media platforms, what we’re missing is what the early internet had in spades: AIM away messages. Myspace pages. Some little corner of the internet where people got to be themselves, and in doing so, carved out a genuine place in that vast ocean of bits.

It’s certainly a start.

Also texturing is a thing! Wow it’s hard.

New Skill Acquired: Blender!

I’m going to keep this one short and sweet! After doing some awesome tutorials courtesy of the Blender Guru, I’ve finally given 3D modeling a real shot! The tutorials were a great overview of the tool and provided an excellent first step.

The proof is in the pudding! … well, icing.

Andrew Price, who runs the YT channel, approaches it from the “80/20” perspective: 80% of the time, you’ll only need 20% of the tools in a given piece of software. So, true to form, after a few short hours of listening to his instructions I had made my first render - this little doughnut!

Afterwards, I thought a great next step (and litmus test of whether I actually learned anything) would be to try and create something “from scratch” (at least, from scratch within the 3D domain), without tutorials.

A few hours later, I’m pretty proud of the results for the third-ish object I’ve ever made in this software.

Overwatch’s “Pachimari” character, this version affectionately named “Piggymari”


One last bit - in the process of looking up which “Pachimari” to model (I knew that simple smooth textures were going to be way easier than, say, the mummy one), I stumbled upon a new vocabulary word / concept: Yuru-Chara.

Long story short, that’s the Japanese term for the cute little brand mascots which are made to be approachable and relaxing. I rather like the idea; we could all use a little more cute!

Avatar Selection Choices - Reality vs Abstraction

Now that I’m starting to interact more socially in VR - at least twice weekly - finding a ‘suitable’ avatar to represent me in these digital worlds has become an important task. Whether it’s for personal, creative, or professional consistency, we’ve all seen the persistence of “avatar identity” through famous personas: deadmau5, Daft Punk, and even Lady Gaga, to some extent. As virtual socializing stands poised to become the dominant mode of interaction - in a moment when the whole world has gone online - what’s stopping it from really taking off? It might have something to do with never feeling like there’s quite enough “us” in how we’re able to interact.

How does visual representation play into this? For one, a few weeks in I was a little tired of being ‘Daniel the Boxy Weird Red Panda’. After poring over a couple of options, I really resonated with one of the available tools, by 3D software company Wolf3D: “Ready Player Me” avatar generation. You can try it yourself here, or try their fullbody version. What drew me in was one of their blog posts: “Finding the right balance between realistic and abstract 3D avatars”.

To put it simply: if we make our representations too abstract or untethered, the avatar risks being meaningless; too close to ‘real’ and the Uncanny Valley kicks in. So we have to find the “Goldilocks Zone” of digital representation. Striking a balance between digestible-and-fun and “accurately representational”, as far as identity goes, is one of the biggest remaining challenges in making VR more than a passing tech fad.

The Good

ReadyPlayerMe’s Avatar Creation

So here’s what I came up with!

Now I don’t believe it to be perfect, and the hair’s about 6 inches closer to “Aragorn” than what I’m currently working with, but I think it’s pretty close! It gets the beard, my new square glasses, the sort of messy long vibe I’ve been working with, enough that I feel like if someone met me in VR they’d have an easy enough time recognizing me in real life. It also seems, by virtue of facial expression, pretty inviting. And the hoodie/cardigan felt close to how I’ve been showing up for most Zoom meetings lately.

The fullbody version of their avatar creator actually has a much closer hairdo, but then misses the mark on the beard (why are they different?) and on clothing styles, and misses for me on body proportions too. I can’t make any claim about how ‘easy’ this is to build, but even a selection between a few body types would feel nice. I’m not a Fortnite-hero slim dude, and that’s perfectly fine with me! Being forced to represent myself as one in full-body isn’t the most comfortable choice, however.

Clothing is also such an important part of making first impressions (I mean, it’s why we dress up for interviews, right?), and I felt really limited by the options here. There didn’t seem to be any variation of non-business dress that didn’t come with some substantial level of quirk, leaving me to choose between the two I minded least: “I guess this is a seasonally appropriate wolfdeer sweater” and “maybe I’ll stream some Beat Saber in this cybercoat”, neither of which feels quite on the mark for, say, interacting with friends casually.

You know it’s ok to be a 6’ dude who weighs more than 135 pounds, right?

ReadyPlayerMe’s Fullbody Avatar Creator

Time to exploit my employees for months on end of endless crunch. After I condition, of course.

ReadyPlayerMe’s Fullbody Avatar Creator

The Meh

This me just looks so sinister.

AltspaceVR Avatar Editor

I do, however, want to contrast it against something slightly further toward the abstract end of the abstract-real continuum, but still in the same neighborhood: my AltspaceVR avatar. Coincidentally, this is how I met many folks for the first time this year at GameSoundCon - a trippy but fun experience - but I feel like it captures the “me” experience fairly poorly, largely through poor facial-hair and clothing options.

Now a lot of the features are pretty similar (at least within the confines of each application’s possibilities) but I still think the style choices create an entirely different timbre of presentation. The “ReadyPlayerMe” avatar seems infinitely more friendly and inviting. I wish I knew enough visual language to articulate what I feel like causes this, but man if it isn’t fascinating.

And the Unfortunate

As for the ‘failed experiments’, here are examples of both ‘too abstract’ and ‘uncanny valley’ - my apologies to both the default Mozilla Hubs bots and AvatarSDK. I really respect the approach of having such a nondescript avatar as the default: not requiring realistic self-representation by default can be a wonderfully safe feature for users!

These lil robos aren’t bad! But they’re also not… me.

Mozilla Hubs ‘Default Avatar’ Options

I chose an unfortunate screenshot but I promise it’s even more unsettling in motion.

From AvatarSDK Face2.0 Demo Video

That being said, I believe that if we want there to be any longevity to VR beyond a ‘cool tech craze’, learning how to identify and represent ourselves comfortably is pretty important.

Papers Worth Reading: "Shaping Pro-Social Interaction in VR"

Authors: Joshua McVeigh-Schultz, Anya Kolesnichenko, Katherine Isbister
Available At: https://dl.acm.org/doi/10.1145/3290605.3300794


We seek to elucidate the constellation of design choices that shape pro-social interactions in commercial social VR… to study the relationship between design choices and social practices… [and] clarify the stakes of these choices.

My first reaction reading this paper was a sigh of relief. This is not another study on some microcosm of interaction touting a VR feature’s capacity to make users 37% more likely to want to continue the experience, but rather a significantly pulled-back look at the features designers have chosen to implement, and how those choices correlate with their creators’ varied philosophies on human interaction, agency, and responsibility. Of course, I poke some fun at myself - my thesis proposal having just been accepted under the working title Towards Increased Telepresence in Co-Located Extended-Reality Experiences - but I also feel validated in the core of my thesis’ pursuit: to uncover the effect audio has in a human-connection-centered, rather than the more common attention-centered, line of questioning.

Having recently attended GameSoundCon 2020, where many of the social events were held virtually, this question built on a great many important conversations I’ve had with friends and colleagues about their relative lack of safety in environments where “networking” and “socialization” are so often conflated with copious alcohol and extreme social/career power dynamics. I found myself thinking:

How does the virtual space affect how we interact? How are our personal boundaries codified, respected, or left open to violation?

I personally remember experiencing a profound weirdness in unintentionally walking “through” people at times, and found both navigating and interpreting the less-tangible relationship of space and body language extremely difficult. McVeigh-Schultz, Kolesnichenko, and Isbister, through the interviews they conducted, answer many of those questions in a quite meaningful way.

“this [auditorium] didn’t … have the seating… it used to be a madhouse…. Once we put the seating in,… they [understood] that in real life you would sit down and be quiet”.
- Tamara Hughes, Community Support Coordinator, “Rec Room”

The impact of the space itself - and the societal expectations attached to it - seems to be a through-line across all the developers. Sports-themed Rec Room had to eliminate a locker-room-style area because of the ensuing “locker room talk”, while AltspaceVR’s inclusion of burgers, marshmallows, and firecrackers around a campfire (itself chosen for the underlying space-experience of storytelling and intimate memory-making) made users all the more comfortable engaging in these virtualized spaces almost ritualistically. This is a fascinating (if understandable) concept that, to me, presents an incredible challenge for level designers in the near future. Once we’ve moved past the “you can do stuff kinda like real life here too!” phase, how will these lessons impact more fantastical or abstract experiences? The potential for exploration, combination, and subversion is nearly limitless: could you have an Escherian lounge space that, despite the initial visual disarray, promotes peace and relaxation? The architectural / spatial / locational vocabulary is rich with subtext that’s only just beginning to be uncovered.

One particularly interesting observation concerns the creation of new gestures to replace old ones. AltspaceVR allows users to generate a small cloud of one of a handful of emojis, which I can anecdotally confirm becomes a great way of communicating emotion in larger group environments - say something really nice, everyone throws up a heart emoji, everyone feels great! And one interviewed VRChat content creator talked about the onset of gestures like “head patting” or “feeding” as a replacement for hugs. While this is a whole different topic, it’s worth taking a moment to recognize the ways in which specific communities engage with each other: VRChat has a noted presence of users who identify as “furries” - and create avatars that reflect this identification - and the freedom (or lack thereof) in avatar selection inside respective communities can lead to a growing vocabulary of virtual body language. “Petting” is a gesture that likely carries different meanings for canine-styled avatars than for, say, robots.

A VRChat user with a ‘Furry Avatar’ posing in a recreation of 2020’s most spectacular location mix-up

“The fact that the community is empowered to kind of just make space their own, has mean[t] we’ve been able to lean on a few community members that serve as ambassadors as well.”
- Ishita Kapur, Senior Product Manager, AltspaceVR

Deciding where to ultimately lay the burden of responsibility for things like moderation and permission is, in and of itself, an explicit decision with great ramifications. As we’ve seen play out dozens of times in the public eye, social networks and platforms often grapple with this, and make conscious decisions based on an (uneven) mixture of business interests and the intended ethics of the platform. A complete lack of repercussions and a ‘do what you want’-style approach can lead to 4chan- (or 8chan-) style devolution into “the place where those who’ve been kicked out of every other space meet”, and even enforcing pre-existing guidelines can become blurry when trying to balance an appearance of fairness, as we’ve most recently seen in the backlash Twitter, AWS, Apple, and Google received for no longer supporting certain politicians and/or services that repeatedly broke their TOS.

I referenced Kapur’s quote both for its truth - it’s a high-positive-potential social decision and a savvy business move - and to caution against the part left unsaid, which is most likely something along the lines of “but we’ll kick you out if you do things we disagree with”. Leaning on individuals as ‘brand ambassadors’ can be a tremendous way to bring people into a social space and cultivate community, but left unchecked it can be quite a dangerous force. As one non-VR example: when Games Workshop posted a statement on inclusivity, it sparked a noted negative reaction from certain “community leaders”, whose angrily broadcast message to their >250k followers left a number of folks feeling significantly more excluded than before the official statement was released. While GW forced said figure to remove part of the brand name from his social media, the damage had already been done to the community. The community-leader-centric approach can be a wonderful way to onboard new folks and create ambassadors for your environments, but there’s a certain risk of mutiny if there isn’t consistent awareness of what content is being moderated on your platform.

One challenge of blocking actions that require the victim orient [themselves] to the offender is that harassers often attempt to game this mechanic by escaping quickly.

Lastly, it’s worth mentioning the bevy of reporting strategies that various platforms offer. Blocking, muting, “personal space bubbles”, and more are some of the present options users have to ‘protect themselves’ in these virtual environments. In an attempt to address the issue in the above quote, many platforms have implemented some form of ‘recent interactions’ list so users don’t have to defend their boundaries exclusively through in-environment navigation and targeting, which is a great step (sketched below). I want, however, to turn a critical eye towards one disparity between the two categories McVeigh-Schultz et al. delineate: open ‘social VR’ environments and “private (safe) environments”. I’m somewhat critical of this distinction (in the eyes of the designers, not McVeigh-Schultz et al.), especially as there’s a trend towards increased security features in the former.
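
To make that ‘recent interactions’ mechanic concrete, here’s a small hypothetical sketch - not any particular platform’s API, just an illustration in Python - of the rolling log a client could keep so a user can block or mute someone after the fact, without having to turn back toward them in-world:

```python
from collections import deque
from dataclasses import dataclass
from time import time
from typing import List


@dataclass
class Interaction:
    user_id: str
    display_name: str
    timestamp: float


class RecentInteractions:
    """Rolling log of recently-encountered users, so blocking doesn't
    require re-orienting toward (and re-engaging with) a harasser in-world."""

    def __init__(self, max_entries: int = 50) -> None:
        # Oldest entries fall off automatically once the log is full.
        self._log: deque = deque(maxlen=max_entries)

    def record(self, user_id: str, display_name: str) -> None:
        """Call whenever another user enters voice/interaction range."""
        self._log.append(Interaction(user_id, display_name, time()))

    def recent(self, within_seconds: float = 600) -> List[Interaction]:
        """Most-recent-first list of encounters inside the lookback window,
        suitable for populating a 'block someone I just met' menu."""
        cutoff = time() - within_seconds
        return [i for i in reversed(self._log) if i.timestamp >= cutoff]
```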

CW/TW: Indirect references to SA/SV

It’s a well-known statistic that the vast majority of sexual assaults are perpetrated by those known to the victim - figures range from 76% of all cases to 93% of cases with juvenile victims. Without belaboring the point, safety features need to be a priority and a consideration even in “private environments”. While the potential for bad actors to appear out of nowhere in an open meeting space is certainly present, treating the potential for virtual violation among known users in ‘invite-only trusted spaces’ (such as conferences, as I previously mentioned) as a negligible concern is naive at best, and harmful design practice at worst. I don’t mean to levy accusations against any one particular company, but rather to warn against the danger of assuming that encountering a ‘troll’ in an anonymized space is the greatest threat to personal comfort.


Sorry for the long read, but I think the observations made by Joshua McVeigh-Schultz, Anya Kolesnichenko, and Katherine Isbister are fascinating in their breadth and poignancy. Check out their paper - don’t just give my thoughts a read! If you’re interested in digging more into some of the concepts, here are some quotes (and the papers they reference) that I’m planning on looking further into:

  1. “Social VR can magnify conflict or harassment [27, 35], underscoring the importance of designing for social safety in shared immersive environments”

    1. [27] “My first virtual reality groping”

    2. [35] “All are welcome: Using VR ethnography to explore harassment behavior in immersive social virtual reality”

  2. “These include a comprehensive set of guidelines for usability and playability in VR [11]”

    1. [11] “Are Game Design and User Research Guidelines Specific to VR Effective in Creating a More Optimal Player Experience? Yes, VR PLAY”

  3. “The connections that interview respondents made between environmental cues and social expectations also resonate with longstanding interests within HCI concerning the relationship between space (as a designed medium) and place (as the social fabric) [13, 15].”

    1. [13] “Re-space-ing Place: ‘Place’ and ‘Space’ Ten Years On” (2006)

    2. [15] “Re-Place-ing Space: The Roles of Place and Space in Collaborative Systems” (1996)

  4. (For those interested in interview analysis techniques) “We utilized a semi-automated transcription and video annotation tool (Temi)… for analysis we took cues from Saldaña’s approach to qualitative coding [32].”

    1. [32] “The Coding Manual for Qualitative Researchers”

Designing a Virtual Environment in Spoke and Blender-Render-Fender-Benders

For my “Social VR” course at NYU, this week we were required to design a simple environment using Spoke, Mozilla’s web-based 3D scene builder. Overall, I think it’s a pretty handy tool for how much accessibility it affords, and the integration with various asset sources makes it fairly simple to pick up!

When thinking of an environment to get started on, I leaned into being a big fan of tabletop RPGs: the now-trite opening of many a year of play has been some variation on “So all of your characters meet in this Tavern…”, and I thought, why not start this ‘Adventure’ in one? To avoid burying the lede, here’s a snippet of what I was able to dream up:

You can practically hear the farmer who’s having a Giant Rat problem already…

The fun part of this, however, isn’t the finished product but rather what became my first foray into learning Blender, after grabbing assets from Sketchfab that weren’t quite right. Firstly, I need to properly credit the artist François Espagnet for his wonderfully stylish assets.

However, when I first went to import those nice windowed-wall sections, there seemed to be a glaring error in the context of my cozy nighttime scene:

Something doesn’t exactly scream “nighttime” here

So I figured, “This is a great time to learn how to change materials in a 3D modeling program. Or maybe it’s textures. I don’t quite know the name yet,” and opened up the asset to try and figure out what I was doing.

After some great “How to make glass materials in Blender” tutorials (so it was materials), my object looked great in the Render Preview screen - wait, viewport - but when I exported it and dragged it into Spoke, my beautifully transparent windows were now part of a colorless gray slab. After a few hours, though, I was able to figure it out, with beautiful transparency intact. Mission accomplished.

A little Dark, but voila! It works!

As for the Technical How, let’s talk details for those interested.

So, in order to make exporting / rendering / uploading one smooth process, one of the increasingly adopted formats is the GL Transmission Format (glTF) and its binary partner, GLB. Developed by The Khronos Group, they encode all of the relevant object info in a neat little JSON-based package, capable of storing textures, geometry, animation, and more, which is incredible.
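
If you’re curious what that export step looks like in practice, here’s a minimal sketch using Blender’s Python API (bpy), assuming the glTF add-on that ships with recent Blender versions; the file path is just a placeholder:

```python
import bpy

# Export the whole scene as a single binary glTF (.glb) file,
# which Spoke can ingest directly.
bpy.ops.export_scene.gltf(
    filepath="/tmp/tavern_wall.glb",  # placeholder path
    export_format='GLB',              # single binary file; 'GLTF_SEPARATE' splits out .bin/textures
    export_apply=True,                # apply modifiers so the export matches the viewport
)
```

(The same thing is available through File → Export → glTF 2.0 in the UI.)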

That being said, the Blender glTF exporter isn’t entirely perfect and hasn’t entirely caught up to the feature set that Blender itself provides. While most of the aforementioned tutorials recommend using a Principled BSDF shader node, turning Transmission all the way up, and tweaking other settings to taste, the current exporter breaks with certain parameters, one of them being ‘Specularity’. After considerable time, ~40 export attempts, and some forum digging, I found a way to get some sloppy transparency working. Instructions as follows:
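
As a rough sketch of the kind of alpha-blend workaround that tends to survive the glTF export - driving the Alpha input on the Principled BSDF instead of Transmission - here’s the idea in Blender’s Python API. The material name and values are placeholders, not necessarily the exact settings I landed on:

```python
import bpy

# Sketch of an alpha-blend "fake glass" that the glTF exporter understands.
mat = bpy.data.materials.new(name="Window_Glass")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]

# Drive transparency through Alpha rather than Transmission,
# since Transmission-based glass may not survive the export.
bsdf.inputs["Alpha"].default_value = 0.25      # mostly see-through
bsdf.inputs["Roughness"].default_value = 0.1   # keep a slight sheen

# The blend mode is what tells the exporter to mark the material as transparent.
mat.blend_method = 'BLEND'

# Assign the material to the currently selected object (e.g. the window mesh).
obj = bpy.context.active_object
if obj is not None:
    if obj.data.materials:
        obj.data.materials[0] = mat
    else:
        obj.data.materials.append(mat)
```

Re-exporting after that (with the snippet above or File → Export → glTF 2.0) should carry the transparency through to Spoke.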

It’s not quite as beautiful as a proper ‘glass’ material, but it gets the job done and lets light through! There you have it. I’m looking forward to learning how to do it all properly, but for now I’m glad I got it done.

Process Blog? What - and why - is that?

Perfectionism sucks, plain and simple. And the rest of this infrequently-updated website might lead you to believe that I haven’t been up to much. But here’s the kicker: I totally have, and I’m tired of letting myself get in the way of sharing that.

This blog is one-part class assignment, and three-parts me being sick of hiding behind imperfection as an excuse to never put anything out there, so here goes nothing.
