VR Grocery Shopping


Concept, User Research, Persona Building, Journey Mapping, Wireframing, Prototyping, User Testing


10 weeks



/01 The Challenge

An Exploratory Project into the Feasibility of VR Grocery Shopping

For ten weeks, we explored the feasibility of a grocery shopping platform in Virtual Reality (VR). The exercise aimed to understand how users might interact within a virtual grocery shopping medium based on their experiences in the real world.

The past two years have accelerated our dependence on digital platforms, with consumers becoming accustomed to tech as a replacement for once vital in-person interactions. From the way we work, shop, or enjoy entertainment, consumers have developed an understanding that nearly everything can be done online.

The Key Question

Virtual Reality is no longer a nebulous technology found only in science fiction. It has become a tool available to the average consumer, but as an emerging technology, is still new enough that it poses challenges for users. It's intuitive yet clunky. It attempts to simulate how we interact in the physical world but utilizes new virtual tools and occurs within a virtual space. Though what we are capable of in VR environments is often an emulation of real human behavior, the concepts remain new to us.

Our challenge was to understand the capabilities of VR for an activity nearly everyone performs and to determine the viability of a grocery shopping platform built on it.

/02 Discovery

Searching for Direction

Before we began user interviews, we had to consider two key questions:

  1. Who are our target users?
  2. How do we recruit participants?

We weighed the costs and benefits of outsourcing recruitment to a usability testing firm but ultimately chose to recruit through friends, family, and cold calling. Given our project scope and early stage of generative research, this was faster, cheaper, and yielded areas of interest that we could continue to explore throughout our research.

Qualitative Research

We probed areas of interest by asking grocery shoppers open-ended questions, allowing users to guide us toward issues we may have overlooked while acquainting ourselves with the space. This was done with an eye toward how VR might improve the grocery shopping experience.



Produce Problems

Nearly all customers interviewed had poor experiences ordering fresh fruit/produce online.


Inventory Disparity

Inventory disparity was reported as a major pain point for both the in-store and online shopping experience. For in-store shoppers, this meant being unable to find an item; for online shoppers, it meant online products didn’t reflect the actual quantity in the store.


Asking for Help

Users noted that they often ask for help while shopping, usually to find an item that they can't locate on their own.

A New Way of Couponing

Users reported increased usage of store-specific apps for coupons, discounts, and as an enhancement to the overall grocery shopping experience.

(L) Our preliminary assumptions and their groupings on incorporation of VR into grocery shopping and (R) a spreadsheet of responses in the generative research phase

Reaching Outwards

Turning to the Internet

We turned to the Reddit community, hoping to gain insight from those already familiar with the technology and its uses. We created a survey to gauge interest in a VR grocery shopping experience and administered it to relevant communities. Those that participated included r/HTC_Vive, r/Oculus, and r/VirtualReality.

(L) A spreadsheet of our survey findings, (C) a detailed graph view of things shoppers value when grocery shopping, and (R) clustered groups of answers from online respondents

These were the key insights that would go on to drive our ideation phase:

Item Inspection

When asked "Why would you/wouldn't you be interested in VR grocery shopping?", interaction with products and the ability to inspect them from all angles was the most noted response.

Steep Learning Curve

Those with less experience in headsets indicated greater hesitance to engage with a VR grocery shopping experience.

Convenience Above All Else

Convenience is the primary motivating factor for determining where customers will choose to shop, and they expressed a willingness to pay for it.

Wayfinding Frustration

Users reported varying systems of categorization and store layout as the main drivers of frustration when trying to find items.

Two personas developed for alignment on user empathy

Problem Statement

As businesses adjust to new online consumer behavior and preferences, grocery stores have struggled to offer an enjoyable shopping experience from the comfort of one’s home.

A VR grocery shopping experience should offer not only the benefits of online shopping but also the satisfaction the customer comes to expect when purchasing in-store.

/03 Approach


Design Strategy

We approached our designs from the framework of recontextualizing – rather than reinventing – the grocery shopping experience. Two questions anchored our approach:

  1. How can we design this experience without completely overhauling users’ ideas of grocery shopping?
  2. How can we leverage users’ mental models of grocery shopping to offer them a new way of completing a task they’re already accustomed to doing?

Ideating Around Pain Points

In considering how a VR platform might improve areas of frustration in grocery shopping, we focused on the pain points our research highlighted:

  1. Wayfinding
  2. Efficiency and Time Productivity
  3. Item Interactivity
  4. Bridging the Learning Curve

Variation Through Sketching

We sketched various ideas that went beyond simply placing a grocery store into an immersive VR environment.

How Might We...

...enable movement for users within a VR context (if at all)?

...simulate the idea of a cart that holds users' choices until they're ready to purchase?

...incorporate gestures that emulate behavior from a physical grocery shopping experience?

...personalize the experience to the individual?

A New Framework for Grocery Shopping

Our vision was to introduce users to a new way of finding and interacting with products via the Grocery Carousel, a Lazy Susan-like display of columns rotating around a fixed, central point. These columns would aim to mimic the aisles of a grocery store as if they were vertical instead of horizontal. Our first iterations imagined these as grid-like displays that could be rotated, from which a user could grab items, inspect them, and add to their cart – just as they would do from shelves in real life.

A 3D prototype of a reimagined grocery shopping experience created in Unity

We understood the need to distinguish between “levels” of the experience — allowing a user to progress through varying tiers of detail when viewing products. We aimed to draw parallels to the in-store setup by imagining the following, with each step bringing the user to a more detailed viewing and interactive ability:

General Carousel View

All item categories (columns) represented by a selection of groceries from that category, likened to one’s view when first entering a store and seeing various aisles

Detailed Shelf View

As if the user had walked into a particular aisle or section at the store and now possessed the ability to view more closely the items within it

Product Detail Page

A detailed page of item-specific information that would appear if a particular item was selected for further inspection

Designing for Interactivity

A major consideration remained: what drove user interest in a VR grocery shopping platform? Recalling that almost half of polled users noted the "ability to inspect items at every angle and understand the scale of products" as the main draw for such a product, we considered the following:

How Might We allow users to unlock context-specific gestures without having to engage an additional menu to enter a different mode?

We were inspired by Apple, particularly its use of touch gestures on its devices. Recall the process by which an iPhone app signals that it is ready for moving or deletion – it begins to shake when tapped and held.

We built on this idea – a defined area where an object could be held and "anchored" – to unlock an additional set of gestures available for that particular item. We would call this new mode the "Sandbox."

It would offer the most detailed level of object manipulation – understanding an item's size, changing its quantity or volume, evaluating the product in closer detail – and truly leverage the benefits of VR technology.


More of a “mode” than a page, the Sandbox would enable users to manipulate 3D assets and add them to the cart (e.g., halving or quartering a watermelon, any or all parts of which may be purchased)

Some sketching and brainstorming exercises on how to integrate gestures into product detail pages

The First Profound Affordance

With an understanding of how this reimagination might work, we generated a sitemap to more clearly envision the architecture of the environment. This highlighted the difficulties of mapping out a 3D VR experience on a 2D plane; it also meant giving thought to user movement through Virtual Reality experiences. We homed in on the sense of presence, which Mina C. Johnson-Glenberg referred to as the “first profound affordance” of Virtual Reality in her 2018 study Immersive VR and Education: Embodied Design Principles That Include Gesture and Hand Controls.

Delivering Content to the User

Consider the following: You're immersed in a headset, on a product detail page or the detailed shelf view of the dairy section. On a web or mobile interface, it would make sense that you might navigate to the frozen foods by backing out of the current selection, finding the new category, and then working your way once again through the options. However, in a VR setting, this navigation entails lots of movement – and increases the likelihood of what is called "simulator sickness."

We felt that a user having to manually navigate their way through the aisles of a store and find specific grocery items would not only defeat the entire purpose of a convenient product, but also subject users to excess motion. We considered how we might ease navigation between levels of the experience.

Clippy v2.0

We introduced the idea of a Smart Assistant, our alternative to the omni- yet optionally-present Siri or Alexa — willing and able to teleport one to a very particular part of the experience. We also added tabs of main categories (those represented by the columns of the carousel) that would allow users to reduce the steps needed to move around level-by-level.
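The level-based hierarchy and the teleport shortcut can be sketched as a small navigation model. This is purely an illustration in Python (the class and method names are our own, not the prototype's code): backing out level-by-level is the web/mobile pattern, while `teleport` stands in for the Smart Assistant or category tabs jumping the user straight to a destination.

```python
# Hypothetical sketch: the three viewing levels as a navigation stack.
LEVELS = ["carousel", "shelf", "pdp"]  # general view -> detailed shelf -> product page

class Navigator:
    def __init__(self):
        self.stack = ["carousel"]  # users start at the general carousel view

    @property
    def current(self):
        return self.stack[-1]

    def descend(self, level):
        # Step exactly one tier deeper, e.g. carousel -> shelf -> pdp.
        assert LEVELS.index(level) == LEVELS.index(self.current) + 1
        self.stack.append(level)

    def back(self):
        # The web/mobile pattern: pop one level at a time.
        if len(self.stack) > 1:
            self.stack.pop()

    def teleport(self, level):
        # Smart Assistant / tab shortcut: rebuild the stack directly,
        # replacing several back-and-descend steps (and the associated
        # motion) with a single jump.
        self.stack = LEVELS[: LEVELS.index(level) + 1]
```

The point of the sketch is the asymmetry: `teleport` collapses an arbitrary number of navigation steps into one, which is exactly the motion reduction that matters in a headset.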

/04 The Vision


An Exploratory Approach

As this was our first exploration into Virtual Reality, we evaluated the time it would take to create a prototyped experience, the constraints of free VR prototyping software, and remembered that our main priority was research. We decided on a low-fidelity prototype for exploratory purposes. This allowed us to focus on content rather than visual design while developing a more holistic understanding of the user journey.

DraftXR, AdobeXD, and ShapesXR (FKA Tvori)

We utilized the DraftXR plugin for AdobeXD, which allows designers to bring 2D assets into a 3D environment. Though it lacked the functionality and true feeling of presence within a VR environment, we felt comfortable mocking up most of what we had envisioned for the project. Paired with screenshots of our work in ShapesXR, we asked users to apply a touch of imagination when progressing through a clickable prototype. Though we lacked the ability to utilize gesture control in conjunction with our VR headsets, we felt that a series of click-through screenshots demonstrating hand control and gestures worked well enough to communicate the functionality we had imagined.

Future State Storyboarding sketches illustrating an end-to-end user flow

Drawing from the Gaming Industry

Since we were using the Oculus Quest 2 VR headsets, we imported the native gestures that Oculus users were already accustomed to performing, such as point and pinch to select. With this basic framework for selection established, we turned to the VR gaming space to explore more engaging ways of interacting in headsets. We chose the space because video games are designed to be immersive, entertaining experiences. They must engage and hold onto the gamer: this means pushing the limits of either storytelling, graphics, mechanics, or a combination of all three.

Our approach to interactions was inspired by a game called SUPERHOT, wherein a player must destroy oncoming waves of enemies by punching, throwing objects, or shooting weapons.

For a SUPERHOT player, there is no pointing and clicking; instead, they must reach out with their arms and grab objects as though the objects were truly in front of them. We were intrigued by these mechanics of direct object interaction and intuited that they would translate well into our VR grocery shopping exploration.

Leveraging the Literature

Ditching the Controllers

Currently, VR relies on dual controllers (one in each hand) for point-and-click interactions within the headset. Although these controllers apply the basic principles of a computer mouse and are therefore precise and easy to learn, our findings indicated that this method will likely soon be a relic of the past – echoing the sentiment of Meta, Oculus's parent company, which released gesture controls for the Quest headset in 2020.

As this approach related to our project, we decided to leverage the new technology and ditch the controllers, designing for a new experience controlled by only the body.

Direct Object Interaction

Our ideation on hand gestures was informed further by the work of Guerra et al. (2019) in Applied Sciences, Hand Gestures in Virtual and Augmented 3D Environments for Down Syndrome Users, which tested commonly used interactive 3D gestures with both Down syndrome (DS) and neurotypical participants. We chose this study to drive our next decisions with two key areas in mind:

Reducing Cognitive Load

Design in a way that utilizes gestures similar to those used in real life, with the goal being the simulation of actions performed in a physical setting

Curb-Cut Effect

Although initially designed to benefit people with disabilities, accessible designs can produce a ripple effect of benefits for a much larger group.

The Guerra study suggests that there are differences in the successful execution of different gestures and, more importantly, that due to limitations in motor skills, 3D gestures should involve the whole hand rather than individual fingers.

The impact of Guerra's work on our project was twofold:

  1. It validated our decision to incorporate direct object interaction, establishing that cognitive ability would not be a critical performance factor.
  2. We would expand 3D accessibility by improving the native Oculus gesture of point-and-pinch to select, from thumb + index finger to thumb + all four fingers. This more closely resembles the gesture of grabbing, which demonstrated a higher success rate in the study.
A comparison of hand gestures, from the original point-and-pinch (L) to the newer "grab" (R)
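A whole-hand grab of this kind can be approximated by checking whether every fingertip has curled within a threshold distance of the thumb tip, versus the native pinch, which only checks the index finger. The sketch below is our own illustration in Python, not the Oculus SDK's actual classifier; the names and the 3 cm threshold are assumptions.

```python
import math

FINGERS = ("index", "middle", "ring", "pinky")

def dist(a, b):
    # Euclidean distance between two (x, y, z) points.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_gesture(hand, threshold=0.03):
    """hand maps fingertip names ('thumb', 'index', ...) to (x, y, z) in metres.

    'grab'  -> all four fingertips near the thumb (whole-hand gesture,
               the easier motor action the Guerra study favours)
    'pinch' -> only the index fingertip near the thumb (native Oculus select)
    """
    thumb = hand["thumb"]
    close = [dist(hand[f], thumb) < threshold for f in FINGERS]
    if all(close):
        return "grab"
    if close[0]:  # index finger only
        return "pinch"
    return "none"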

We designed the direct interaction experience by repositioning the user closer to objects, about an arm's length away. With the distance closed, the user could reach forward and directly grab hold of objects. In the screenshot below, a prototype demonstrates grabbing a yellow bag of candy. Just as easily as a user can grab an object directly in front of her, she can also select objects farther away: to select the can of chips on the top shelf, the user raises her hand toward the object and performs the grab gesture, snapping the item into her palm.

We also integrated the following features, believing that these additions would create the most advantageous environment for users and minimize interaction costs:

Highlighted objects: a visual indication that certain objects have become interactive.

Voice command: to be used for activation of a smart assistant that mirrors the presence of a grocery store employee, fulfilling a role similar to that of Siri or Alexa.

Early Troubleshooting

As we worked to expand these accessibility principles into our interactive Sandbox Mode, we highlighted two areas of concern:

Hand Shake

Prolonged raising of the arm could become tiresome, resulting in imperfect or accidental actions

Imprecise Hand Tracking

Without guidance prompts, it would be difficult to know exactly how the object could be manipulated

We improved on these issues by again turning to the video game industry. In an online game of billiards, for example, a dotted line emerges to guide players on where the ball will travel based on the cue’s positioning. The dotted line is easily understood by players of all levels and conveys enough information for an accurate strike.

We believed that incorporating this dotted-line functionality and the use of whole-hand (instead of finger) gestures would help resolve the shaky-hand and imprecise-tracking issues. The dotted line creates a more accurate, true-to-intent guideline, thereby reducing the attempts needed for the desired outcome. A large sweeping hand motion also reduces the time users spend with their arm raised compared to smaller finger gestures.
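Computationally, the billiards-style guide amounts to sampling evenly spaced "dots" along a ray cast from the hand in its pointing direction, so the user sees where a gesture will land before committing to it. A minimal sketch, with hypothetical names:

```python
def guide_dots(origin, direction, length=1.0, n_dots=10):
    """Sample n_dots points along a ray from the hand.

    origin and direction are (x, y, z) tuples; direction need not be
    normalised. Returns the dot positions nearest-to-farthest, which a
    renderer would draw as the dotted guideline.
    """
    mag = sum(c * c for c in direction) ** 0.5
    unit = tuple(c / mag for c in direction)
    step = length / n_dots
    return [tuple(o + u * step * i for o, u in zip(origin, unit))
            for i in range(1, n_dots + 1)]
```

Because the dots are recomputed every frame from the tracked hand pose, small tracking jitter shows up as a visibly wobbling line the user can correct before acting, rather than as a mis-fired gesture.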

A step-by-step illustration of interaction in Sandbox mode. The user can be seen cutting a watermelon in half and adding the selected half to their cart, which affords a highlight cue for visual confirmation.

Incorporating Gesture Controls

We created prototypes according to the key stages we identified in our user flow, incorporating the features we felt were vital for shipping an MVP.

A high-level view of our click through prototype, generated on AdobeXD with the help of ShapesXR and DraftXR

Carousel - Home Shelf and 'Categories' Menu

The carousel view would act as the opening scene for users: a "main menu" from which one begins their experience, as though browsing through grocery aisles in a store.

  • Overview of all categories and the ability to browse within them
  • View recommended and commonly purchased items with Home Shelf
  • Poke to select a button that takes users to a detailed shelf view, showcasing all items in the category
  • Swipe to rotate individual shelves and view items on each
  • Grab item to enter Sandbox mode
  • Grab item to add to cart

Detailed Shelf View, PDP & Checkout

We utilized the DraftXR plugin to prototype portions of the user journey and create photorealistic representations of our imagined 3D object fidelity. We created an array to represent a store shelf and the items on it. A user’s cart would always be right in front of them, affording product additions simply by grabbing items from the shelf and releasing them into it, much like an in-person experience.


Sandbox Mode

Our main area of opportunity to explore new gestures and interactions. A study published in Applied Sciences, Design of 3D Microgestures for Commands in Virtual Reality or Augmented Reality, by Li et al. (2021), found that participants are willing to learn new, unfamiliar gestures for some commands instead of more familiar ones – a concept we wanted to test with users.

  • Slice to adjust the quantity of an item
  • Grab item to add to cart

Smart Assistant (Broccoli Rob)

To make the experience accessible to someone in a stationary position and to minimize the discomfort of navigating around the test layout, we envisioned users employing voice commands to teleport between stages of the shopping experience.

  • Voice-activated smart assistant available throughout the shopping experience
  • Voice command for locomotion
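Intent routing for such commands can be as simple as keyword matching against known destinations. The sketch below is a hypothetical Python illustration (speech-to-text is assumed to happen upstream; the destination names are our own):

```python
# Hypothetical mapping of spoken keywords to teleport destinations.
DESTINATIONS = {
    "snacks": "shelf:snacks",
    "dairy": "shelf:dairy",
    "cart": "cart",
    "checkout": "checkout",
}

def route_command(utterance):
    """Return a teleport destination for a transcribed utterance, or None.

    None signals that the assistant should ask the shopper to clarify
    rather than guessing at a destination.
    """
    words = utterance.lower().split()
    for keyword, destination in DESTINATIONS.items():
        if keyword in words:
            return destination
    return None
```

A production assistant would of course use proper intent classification, but even this toy version captures the interaction contract: one utterance replaces an entire navigate-back-and-descend sequence.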
/05 Validation

User Testing

We conducted a hybrid method of testing, guiding our participants through a user flow of the end-to-end shopping experience compiled in AdobeXD. Participants were encouraged to think aloud and gesture as though interacting with the 3D objects directly.


Usability tasks were developed with the following goals in mind:

  • Understand if the environment afforded users adequate visual cues for how to manipulate and interact with 3D objects and interfaces
  • Test user ability to navigate and understand the layout
  • Explore how users would utilize context-specific gestures
  • Unpack whether users preferred different gestures


We took a page out of the Oculus playbook and its First Steps with Hand Tracking to orient new users with the immersive experience. We created a tutorial video to familiarize users with the reimagined shopping environment, pointing out key features and gestures. There were two main considerations for this:

VR Experience

Most users interviewed didn't have extensive VR experience. The intent of the tutorial would be to prime them for what they could expect to encounter in the testing environment.


Consistency

We wanted users to receive the same set of instructions and accompanying visuals. The video would also reduce the number of variables that could lead to confusion about how to interact in the environment.

Photos from mixed-method testing: (L) A participant works through the hybrid user flow on a traditional computer screen, (R) a participant leverages the Oculus Quest 2 to view the prototype in immersive 3D

Task-by-Task Analysis

We tested our prototype on five users through a mixed-method format of VR (in headset) and 2D (on a computer screen). We moderated the usability testing and asked our users to perform a series of tasks covering the user journey from login to checkout in three phases:

  1. Homeshelf view to explore carousels
  2. Detailed shelf view to explore large-scale UI
  3. Sandbox-mode to explore gestures and natural intuition

Place Candy in Cart from Homeshelf // Navigate to All Snacks

Results nudged us to reconsider the signifiers we'd included to help with navigation around the carousel and into deeper levels. All users stated that they knew it could be rotated but only a few could determine how the apparatus moved. Progressing to a more detailed view presented challenges to users.

Grab: Confirmed

All users gestured with their arms forward as if grabbing the object, then dropped it into the cart in front of them.

Navigation: Unconfirmed

Progressing to the Detailed Shelf View was not as intuitive, with users citing uncertainty of how to initiate – swiping, pointing, or grabbing and spinning – as the cause.

Check Nutrition Facts // Place Item in Cart

Users struggled with the text, unsure whether it was meant simply to be read or was clickable as a link to further information. Furthermore, some users failed to place the chips into their cart from the Product Detail Page when, moments prior, they'd succeeded in performing the same gesture with a bag of candy. We learned this was due to confusion over whether the cart was still interactive.

I wish the prices were more apparent so I wouldn't have to go all the way to the Product Detail Page to find them. Is there a way to have the price appear when I grab the product initially off the shelf?
- Testing participant

Activate Digital Assistant // Cut Watermelon

We anticipated users would struggle with this sequence of tasks, yet to our surprise, all users completed both activities and required very little assistance.

Reliance on Voice Control

Once users were familiarized with voice control, they routinely used or asked to use the feature for the remainder of the activities.

Instinctive Gestures

Testers appeared to act instinctively when cutting, all performing the full hand swipe motion with ease and minimal instruction.

I’d probably use my hand to slice it in half like Fruit Ninja.
- Testing participant

Inspect Cart and Remove Item // Proceed to Checkout

All users successfully removed the chips from their cart on the first attempt. More importantly, they all performed a similar (if not identical) gesture: grabbing the item and tossing it out of view.

All users progressed without issue through the checkout process. Some voiced concerns regarding security, given that their payment method had their information pre-filled; however, this was more closely linked to our prototype fidelity than to what we'd envisioned in a working product. We also realized that we'd not adequately built out the final order confirmation screen, which users have become accustomed to seeing at the end of a purchase; all noted its absence.

This was very cool and definitely more engaging than shopping online or picking groceries through an app.
- Testing participant

After the test, we had users fill out a System Usability Scale (SUS) questionnaire to gauge their subjective assessments of usability. The average score was 72, just above the commonly cited benchmark average of 68.
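For reference, a respondent's SUS score is computed from ten 1–5 ratings using the standard formula: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 to give a 0–100 score.

```python
def sus_score(ratings):
    """Standard System Usability Scale score for one respondent.

    ratings: ten Likert responses (1-5), in questionnaire order.
    Odd items (1, 3, 5, 7, 9) are positively worded: contribute rating - 1.
    Even items (2, 4, 6, 8, 10) are negatively worded: contribute 5 - rating.
    """
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1 (odd)
                for i, r in enumerate(ratings))
    return total * 2.5
```

Averaging these per-respondent scores across our five participants is what produced the 72 reported above.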

/06 Refining our Execution

Pivoting & Further Testing

Final Thoughts

This project was a lesson in challenging decisions from prior steps and considering how to adjust moving forward. Our testing phase proved no different — due to imperfect testing methods and the inability to truly test the gestures we’d imagined within the VR environment, we found the space between success and failure on certain usability tasks was sometimes a bit too ambiguous, and lacked the type of insight that would drive new iterations. However, we found the feedback from our think-aloud testing to be immensely valuable. We’ll highlight some of these areas below and offer some reflections on how they might be improved.

Broccoli Rob

We learned that our choice of Smart Assistant, cheekily named Broccoli Rob, might impose a type of pressure on users to make what are generally considered “smart” choices. Users were split on whether or not they wanted to know that the Assistant could be utilized at any point in the process, and thus have a constant presence somewhere in their field of view. Some suggested that a feature that checks in on shoppers frequently throughout their experience could be helpful, but we're wary that this might contribute to alert fatigue. However, the value of incorporating an Assistant tool to simplify steps in the process has a large potential upside and was regarded as a positive feature by participants in our testing. We can imagine a more active role in moving between categories of groceries, checking out, adding items to one’s cart, or comparing between products with the help of a Smart Assistant.

Information Display

VR environments are highly immersive and stimulating. There is a fine line between providing adequate information for a user to progress and including too much. Some users noted the desire to view prices from the Carousel View and Detailed Shelf View. We’re not sure this is the answer: we iterated earlier on price inclusion and quickly saw that it was a lot for a user to take in at once. We can envision prices appearing when an item is picked up, or, in the case of the cart, when products enter the field of view. We discovered a similar feeling among our participants when it came to Sandbox Mode — they were more confused than aided by the UI elements, so we removed them to return the focus to the gestures.

/07 What we Learned

And Where to Go From Here

Acknowledging our Constraints

It’s important to note that this project was by no means exhaustive — in fact, in light of its speculative nature and our limited resources in prototyping and testing in VR environments, it was never going to be. Our inability to test gestures among our subjects in the headset, paired with being forced to rely on multiple mediums to articulate our vision for what grocery shopping in VR might look like, posed challenges.

However, we feel that this is a great jumping-off point and just the start of where further exploration in the space could lead.

In Response to Our Initial Challenge

As it stands today, we don't believe a product like this is feasible for companies to develop with hopes of a substantial return. Our research and testing indicated that while interest in a VR-based grocery shopping platform exists, it is confined to niche market segments – namely, those already accustomed to spending time in headsets and those who use the technology in other parts of their lives. The barriers to entry remain too high and the day-to-day use cases too few, and interest fails to extend to general audiences. Headsets are still too clunky, too expensive, and seen as a novelty by the general population, especially given access to well-functioning delivery platforms like Postmates and DoorDash, which are native to the now-ubiquitous smartphone.

That said, as the technology matures from novelty into a capable platform that affords unique virtual immersion, we see where a project like this could lead. Amazon Go is a good example: a true grab-and-go alternative to traditional checkout processes, driven by contactless convenience and “just walk out” technology. We don’t believe it is too far a leap to imagine a model that functions similarly, only purchased from the comfort of one’s headset. A quick browse through VR forums shows the leaps the technology has made just since the time of this writing – and its progress is not slowing.

Continued Exploration in the Space

An experience must be usable before it is enjoyable. To that end, we feel that we made strides towards usability: recontextualizing the grocery shopping experience and communicating those ideas in a VR environment. In the future, we imagine a platform capable of complete personalization and can't overstate the value of enabling users to tailor the platform to their own needs: perhaps incorporating the ability to set dietary preferences and restrictions, like setting alerts for foods with processed sugars or those that are high in sodium.

We’d also like to note the exploratory benefits of interactive VR assets, such as understanding size and weight differences between pieces of protein or the textural differences between a crumbly queso fresco and a more solid block of mozzarella. There is much to be gained from the immersive level of detail that Virtual Reality provides, and we believe that the ability to interact, understand, and customize products for one’s order at this level will restore many of the intangibles that are lost in the transition to online grocery shopping.
