Shared Dream Progress Update #2 - The Shaders are Working!
Detailing my progress looking into shaders, and how I was able to achieve selective scene rendering through some Unity Canvas trickery and a little creativity!
In a meeting with my professor to talk about my project idea, we discussed elements of ‘perspective’ and time, helping me solidify my concept: peeking through fragmented ‘windows’ onto different points in time, experiencing the way memories can disjointedly appear within the walls of your own home.
Having decided on my final project for my Social VR class at NYU, I begin the daunting task of looking into how to program with shaders.
For all of the debate about whether or not “Virtual” interactions can ever replace “Real” ones, here’s one not-as-often-discussed side of the story: what does it feel like when you miss out?
In Social VR, avatars offer freedom of expression in many ways. However, participating in voice chat reveals a user's real voice, which can have detrimental effects on the experience, especially for those with marginalized identities. We look further into this phenomenon, and also towards potential solutions.
Being able to interact in a world that’s vague, hazy, and exciting all at once seems to capture the very essence of dreams. Through social VR, we not only have the potential to create dreams - in all of their disjointed, poetic nonsense - but we have the opportunity to invite others into them.
How much of “ourselves” is present in digital interactions? Does it matter?
I look at a few ways in which I’ve interacted ‘virtually’ this past week, and think about how some methods of interaction still manage to feel social despite a prominent lack of “me”.
Now that I’m starting to interact more socially in VR - at least twice weekly - finding a ‘suitable’ avatar to represent me in these digital worlds has become an important task. Whether it’s for personal, creative, or professional consistency, we’ve all seen the persistence of “avatar identity” through famous personas: deadmau5, Daft Punk, and even Lady Gaga, to some extent. As socializing virtually stands poised to become dominant - in a moment when the whole world has gone online - what’s stopping it from really taking off? It might have something to do with never feeling like there’s quite enough “us” in how we’re able to interact.
How does visual representation play into this? For one, a few weeks in, I was a little tired of being ‘Daniel the Boxy Weird Red Panda’. After poring over a couple of options, I really resonated with one of the available tools, by 3D software company Wolf3D: “Ready Player Me” avatar generation. You can try it yourself here, or their full-body version. What drew me in was one of their blog posts: “Finding the right balance between realistic and abstract 3D avatars”.
To put it simply: if we make our representations too abstract or untethered, the avatar risks being meaningless; too close to ‘real’, and we hit the Uncanny Valley. So we have to try and find the “Goldilocks Zone” of digital representation. Finding a balance between digestible-and-fun and “accurately representational”, as far as identity goes, is one of the biggest remaining challenges in making VR more than a passing tech fad.
ReadyPlayerMe’s Avatar Creation
So here’s what I came up with!
Now, I don’t believe it to be perfect - the hair’s about 6 inches closer to “Aragorn” than what I’m currently working with - but I think it’s pretty close! It gets the beard, my new square glasses, and the sort of messy long vibe I’ve been working with, enough that if someone met me in VR they’d have an easy enough time recognizing me in real life. It also seems, by virtue of facial expression, pretty inviting. And the hoodie/cardigan felt close to how I’ve been showing up for most Zoom meetings lately.
The full-body version of their avatar creator actually has a much closer hairdo, but it misses the mark on the beard (why are they different?) and the clothing styles, and misses on body proportions for me too. I can’t make any claim about how ‘easy’ this is to build, but even a selection between a few body types would feel nice. I’m not a Fortnite-hero slim dude, and that’s perfectly fine with me! Being forced to represent one in full-body isn’t the most comfortable choice, however.
Clothing is also such an important part of making first impressions (I mean, it’s why we dress up for interviews, right?), and I felt really limited by the options here. There didn’t seem to be any variation of non-business dress without some substantial level of quirk, leaving me to choose between the two I disliked least: “I guess this is seasonally appropriate wolfdeer sweater” and “maybe I’ll stream some Beat Saber in this cybercoat”, neither of which quite feels on the mark for, say, interacting with friends casually.
You know it’s ok to be a 6’ dude who weighs more than 135 pounds, right?
ReadyPlayerMe’s Fullbody Avatar Creator
Time to exploit my employees for months on end of endless crunch. After I condition, of course.
ReadyPlayerMe’s Fullbody Avatar Creator
This me just looks so sinister.
AltspaceVR Avatar Editor
I do, however, want to contrast it against my AltspaceVR avatar, which I see as sitting slightly further toward the abstract end of the abstract-real continuum, but still in the same neighborhood. Coincidentally, this is how I met many folks for the first time this year at GameSoundCon, which was a trippy but fun experience, but I feel like it captures (largely through poor facial-hair and clothing options) the “me” experience fairly poorly.
Now a lot of the features are pretty similar (at least within the confines of each application’s possibilities) but I still think the style choices create an entirely different timbre of presentation. The “ReadyPlayerMe” avatar seems infinitely more friendly and inviting. I wish I knew enough visual language to articulate what I feel like causes this, but man if it isn’t fascinating.
As for the ‘failed experiments’, here are examples of both ‘too abstract’ and ‘uncanny valley’ technologies - my apologies to both the default Mozilla Hubs bots and the AvatarSDK. I really respect the approach of having such a nondescript avatar as the default, though: not requiring participation in virtual spaces to default to realistic self-representation can be an awesomely safe feature for users!
These lil robos aren’t bad! But they’re also not… me.
Mozilla Hubs ‘Default Avatar’ Options
I chose an unfortunate screenshot but I promise it’s even more unsettling in motion.
From AvatarSDK Face2.0 Demo Video
That being said, I believe that if we want there to be any longevity to VR beyond a ‘cool tech craze’, learning how to identify and represent ourselves comfortably is pretty important.
Authors: Joshua McVeigh-Schultz, Anya Kolesnichenko, Katherine Isbister
Available At: https://dl.acm.org/doi/10.1145/3290605.3300794
We seek to elucidate the constellation of design choices that shape pro-social interactions in commercial social VR… to study the relationship between design choices and social practices… [and] clarify the stakes of these choices.
My first reaction reading this paper was a sigh of relief. This is not another study on some microcosm of interaction touting a VR feature’s capacity to make users 37% more likely to want to continue the experience, but rather a significantly pulled-back look at the features designers have chosen to implement, and the correlation with their creators’ varied philosophies on human interaction, agency, and responsibility. Of course, I poke some fun at myself - my thesis proposal having just been accepted under the working title Towards Increased Telepresence in Co-Located Extended-Reality Experiences - but I also feel validation in the core of my thesis’ pursuit to uncover the effect audio has in a human-connection-centered, rather than the more common attention-centered, line of questioning.
Having recently attended the GameSoundCon 2020 conference, where many social events were held virtually, this question built on a great deal of important conversations I’ve had with friends and colleagues about their relative lack of safety in environments where “networking” and “socialization” are so often conflated with copious alcohol and extreme social/career power dynamics. I found myself thinking:
How does the virtual space affect how we interact? How are our personal boundaries codified, respected, or enabled to be violated?
I personally remember experiencing a profound weirdness in unintentionally walking “through” people at times, and found both navigating and interpreting the less-tangible relationship of space and body language extremely difficult. McVeigh-Schultz, Kolesnichenko, and Isbister, through the interviews they conducted, answer many of those questions in a quite meaningful way.
“this [auditorium] didn’t … have the seating… it used to be a madhouse…. Once we put the seating in,… they [understood] that in real life you would sit down and be quiet”.
- Tamara Hughes, Community Support Coordinator, “Rec Room”
The impact of the space itself - and of the societal expectations attached to it - seems to be a through-line across all developers. Sports-themed Rec Room had to eliminate a locker-room-style area for the ensuing “locker room talk”, while AltspaceVR’s inclusion of burgers, marshmallows, and firecrackers around a campfire (itself chosen for the underlying associations of storytelling and intimate memory-making) made users all the more comfortable engaging in these virtualized spaces almost ritualistically. This is a fascinating (if understandable) concept that to me presents an incredible challenge for level designers in the near future. Once we’ve moved past the “you can do stuff kinda like real life here too!” phase, how will these lessons impact more fantastical or abstract experiences? The potential for exploration, combination, and subversion is nearly limitless: could you have an Escherian lounge space that, despite the initial visual disarray, promotes peace and relaxation? The architectural / spatial / locational vocabulary is rich with subtext that’s only just beginning to be uncovered.
One particularly interesting observation is the replacement of old gestures and/or the creation of new ones. AltspaceVR allows users to generate a small cloud of one of a handful of emojis, which I can anecdotally confirm becomes a great way of communicating emotion in larger group environments - say something really nice, everyone throws up a heart emoji, everyone feels great! And one interviewed VRChat content creator talked about the onset of gestures like “head patting” or “feeding” as replacements for hugs. While this is a whole different topic, it’s worth taking a moment to recognize the ways in which specific communities engage with each other: VRChat has a noted presence of users who identify as “furries” - and create avatars that reflect this identification - and the freedom (or lack thereof) in avatar selection inside respective communities can lead to a growing vocabulary of virtual body language. “Petting” is a gesture that likely carries different meanings for canine-styled avatars than, say, robots.
“The fact that the community is empowered to kind of just make space their own, has meant we’ve been able to lean on a few community members that serve as ambassadors as well.”
- Ishita Kapur, Senior Product Manager, AltspaceVR
Deciding where to ultimately lay the burden of responsibility for things like moderation and permission is, in and of itself, an explicit decision with great ramifications. As we’ve seen play out dozens of times in the public eye, social networks and platforms often grapple with this, and make conscious decisions based on an (uneven) mixture of business interests and the intended ethics of the platform. A complete lack of repercussions and a ‘do what you want’-style approach can lead to 4chan (or 8chan)-style devolution into “the place where those who’ve been kicked out of every other space meet”, and even enforcing pre-existing guidelines can become blurry when trying to balance an appearance of fairness, as we’ve most recently seen in the backlash Twitter, AWS, Apple, and Google received for no longer supporting certain politicians and/or services that repeatedly broke their TOS.
I referenced Kapur’s quote both for its truth as a high-positive-potential social decision and savvy business move, but also to caution against the part left unsaid, which is most likely something along the lines of “but we’ll kick you out if you do things we disagree with”. Leaning on individuals as ‘brand ambassadors’ can be a tremendous way to bring people into the social space and cultivate community, but left unchecked it can be quite a dangerous force. As one non-VR example: when Games Workshop posted a statement on inclusivity, it sparked a noted negative reaction from certain “community leaders”, whose angrily broadcast message to their >250k followers left a number of folks feeling significantly more excluded than before the official statement was released. While GW forced said figure to remove part of their brand name from his social media, the damage had already been done to the community members. The community-leader-centric approach can be a wonderful way to onboard new folks and create ambassadors for your environments, but there’s a certain risk of mutiny if there isn’t consistent awareness of what content is being moderated on your platform.
One challenge of blocking actions that require the victim orient [themselves] to the offender is that harassers often attempt to game this mechanic by escaping quickly.
Lastly, it’s worth mentioning the bevy of reporting strategies that various platforms offer. Blocking, muting, “personal space bubbles”, and more are some of the present options users have to ‘protect themselves’ in these virtual environments. In an attempt to address the issue in the above quote, many platforms have implemented some form of ‘recent interactions’ list, so users don’t have to defend their boundaries exclusively through in-environment navigation and targeting - a great step! I want to turn a critical eye, however, towards one disparity between the two categories McVeigh-Schultz et al. delineate: open ‘social VR’ environments, and “private (safe) environments”. I’m somewhat critical of this distinction (in the eyes of the designers, not McVeigh-Schultz et al.), especially as there’s a trend towards increased security features in the former.
CW/TW: Indirect references to SA/SV
It’s a well-known statistic that the vast majority of sexual assaults are perpetrated by someone known to the victim - from 76% of all cases to 93% of cases with juvenile victims. Without belaboring the point, safety features need to be a priority and consideration even in “private environments”. While the potential for bad actors to come from nowhere in an open meeting space is certainly present, treating the potential for virtual violation among known users in ‘invite-only trusted spaces’ (such as conferences, as I previously mentioned) as a negligible concern is naive at best, and harmful design practice at worst. I don’t mean to levy accusations against one particular company, but rather warn against the danger of assuming that encountering a ‘troll’ in an anonymized space is the greatest threat to personal comfort.
Sorry for the long read, but I think the core observations made by Joshua McVeigh-Schultz, Anya Kolesnichenko, and Katherine Isbister are fascinating in their breadth and observational poignancy. Check out their paper - don’t just give my thoughts a read! If you’re interested in digging more into some of the concepts, here are some quotes (and their referenced papers) that I’m planning on looking further into:
“Social VR can magnify conflict or harassment [27, 35], underscoring the importance of designing for social safety in shared immersive environments”
“These include a comprehensive set of guidelines for usability and playability in VR [11]”
“The connections that interview respondents made between environmental cues and social expectations also resonate with longstanding interests within HCI concerning the relationship between space (as a designed medium) and place (as the social fabric) [13, 15].”
(For those interested in interview analysis techniques) “We utilized a semi-automated transcription and video annotation tool (Temi)… for analysis we took cues from Saldaña’s approach to qualitative coding [32].”
For my “Social VR” course at NYU, this week we were required to design a simple environment using Spoke, Mozilla’s web-based 3D scene builder. Overall, I think it’s a pretty handy tool for how much accessibility it affords, and the integration with various asset sources makes it fairly simple to pick up!
When thinking of an environment to get started on: as a big fan of tabletop RPGs, many years of play have begun with some variation on the now-trite “So all of your characters meet in this tavern…”, and I thought, why not start this ‘Adventure’ in one! To avoid burying the lede, here’s a snippet of what I was able to dream up:
You can practically hear the farmer who’s having a Giant Rat problem already…
The fun part of this, however, isn’t in the finished product but rather in what would be my first foray into learning Blender, and grabbing assets from SketchFab that weren’t quite right. Firstly, I need to properly credit the artist François Espagnet for his wonderfully stylish assets.
However, when I first went to import those nice windowed-wall sections, there seemed to be a glaring error in the context of my cozy nighttime scene:
Something doesn’t exactly scream “nighttime” here
So I figured, “This is a great time to learn how to change materials in 3D modeling software. Or maybe it’s textures - I don’t quite know the name yet,” and opened up the asset to try and figure out what I was doing.
After some great “How to make glass materials in Blender” tutorials (so it was materials), my object looked great in the Render Preview screen - wait, viewport. I exported it and dragged it into Spoke, only to find my beautifully transparent windows now part of a colorless gray slab. After a few hours, though, I was able to figure it out, with beautiful transparency. Mission accomplished.
A little Dark, but voila! It works!
As for the Technical How, let’s talk details for those interested.
So, in order to make exporting / rendering / uploading one smooth process, one of the increasingly-adopted formats is the GL Transmission Format (glTF) and its binary partner, GLB. Developed by The Khronos Group, it encodes all of the relevant object info in a neat little JSON package, capable of storing textures, geometry, animation, and more - which is incredible.
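Because a .gltf file really is just JSON, you can poke at one with a few lines of Python. Here’s a minimal sketch - the tiny asset dict below is made up for illustration (it’s not one of the actual tavern assets), but the top-level keys like “asset”, “meshes”, and “materials” are the ones you’ll find in any exported glTF 2.0 file:

```python
import json

# A minimal, hypothetical glTF 2.0 asset. Real exported files have the same
# top-level structure, just with a lot more data inside each section.
gltf = {
    "asset": {"version": "2.0"},
    "meshes": [{"name": "WindowWall", "primitives": []}],
    "materials": [{"name": "Glass"}],
}

# Round-trip through JSON text, just like writing and re-reading a .gltf file.
text = json.dumps(gltf, indent=2)
loaded = json.loads(text)

# Everything survives as ordinary JSON data you can inspect by hand.
print(sorted(loaded.keys()))           # ['asset', 'materials', 'meshes']
print(loaded["materials"][0]["name"])  # Glass
```

This is what makes debugging export problems (like my gray-slab windows) tractable: you can open the file in a text editor and see exactly which material properties made it through.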
That being said, Blender’s glTF exporter isn’t entirely perfect and hasn’t caught up to the full feature set that Blender itself provides. While most of the aforementioned tutorials recommended using a Principled BSDF shader node, turning Transmission all the way up, and tweaking other settings to taste, the current exporter breaks with certain parameters, one being ‘Specularity’. After considerable time, ~40 export attempts, and some forum digging, I found a way to get some sloppy transparency working.
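The gist of the alpha-blend route is to skip Transmission entirely and let plain alpha transparency carry the effect. As a sketch of what that looks like on the glTF side, here’s a hypothetical “Glass” material built with the field names from the glTF 2.0 spec (`alphaMode`, `baseColorFactor`); the specific values are illustrative, not the exporter’s exact output for my window asset:

```python
import json

# Sketch of a glTF 2.0 material that uses alpha blending instead of
# Transmission. Field names come from the glTF 2.0 spec; values are made up.
glass = {
    "name": "Glass",
    "alphaMode": "BLEND",  # render with alpha blending rather than opaque
    "pbrMetallicRoughness": {
        # RGBA base color: the 0.25 alpha is what actually lets light through
        "baseColorFactor": [0.8, 0.9, 1.0, 0.25],
        "metallicFactor": 0.0,
        "roughnessFactor": 0.1,
    },
}

print(json.dumps(glass, indent=2))
```

It won’t refract like true glass, but anything reading the file (Spoke included) only needs `alphaMode` and that sub-1.0 alpha value to draw the surface as see-through.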
It’s not quite as beautiful as a great ‘glass’ material, but it gets the job done and lets light through! There you have it. I’m looking forward to learning how to do it all properly, but for now I’m glad I got it done.