

2021, 3(5): 407-422

Published Date: 2021-10-20  DOI: 10.1016/j.vrih.2021.09.002


This paper shows how current collaborative virtual environments (VEs) such as Mozilla Hubs and AltspaceVR can aid in the task of requirements gathering in VR for simulation and training.
We performed a qualitative study on our use of these technologies in the requirements gathering of two projects.
Our results show that requirements gathering in virtual reality has an impact on the process of requirements identification. We report advantages and shortcomings that will be of interest to future practitioners. For example, we found that VR sessions for requirements gathering in current VEs could benefit from better pointers and better sound quality.
Current VEs are useful for the requirements gathering task in the development of VR simulators and VR training environments.


1 Introduction
Virtual reality (VR) technologies are of potential benefit in a wide variety of applications, given their availability and affordability. Although it has been reported that VR makes a difference in several industries and processes[1], novel commodity VR hardware opens new possibilities in fields that have not yet been studied in detail. One such application is requirements elicitation, the process through which stakeholders and software developers identify the requirements of a system. In this work, we study the application of VR technologies within this field, particularly in the development of VR simulators and training environments. VR simulator development is a suitable and rich domain to study, as the VR technology intended for the final product can also be used during the development phases. Such software involves VR technologies, and it is of interest to see how such development could benefit from using VR technologies in the early stages. Moreover, simulators are required to reproduce realistic characteristics of a system at some level—particularly a physical 3D space—and VR is an appropriate technology to portray such characteristics. For example, 360° pictures of any 3D space can be produced to easily create virtual walkthroughs during the early stages of any development. At the beginning of any software development task, a team of software analysts, software developers, and end users, among other stakeholders, meets to define the requirements of the system. Such a team may even be geographically distributed—an increasingly common situation—which can limit communication and coordination among stakeholders[2]. Early collaborative prototypes of the simulation environment that have minimal functionality but are visually compelling can be created, providing an advantage over traditional elicitation techniques. For example, a VR reproduction of an environment can be visited by a geographically dispersed team, avoiding any transportation costs for the participants.
This paper reports the lessons that we learned through using current VR technologies (i.e., Mozilla Hubs and AltspaceVR) in the requirements gathering stage of two of our projects. These lessons can orient other practitioners in their development work and inspire future work in this field. The paper is organized as follows: first, we present related work. Then, we describe in detail the method we used to study our VR interventions in the requirements gathering stage of two projects. Later, we discuss the lessons we learned from these interventions, and finally, we present the conclusions and suggestions for future work.
2 Related work
In this section, we discuss previous work on collaborative mixed reality (MR) and techniques for requirements elicitation. Bhimani[3] studied how VR could generally help in a requirements elicitation process. He concluded that VR could be beneficial for projects that involve physical spaces, where it is necessary to consider architectural, geographical, or visual issues. He also warned of the extra costs and effort required to develop the 3D scenario for these sessions. We concentrate our efforts on simulators and trainers, a subset of the VR environments that can be created from physical environments. We also aim to extend the discussion to contextual elements, as we believe that VR could also facilitate such discussion with stakeholders at early stages of development. Finally, we limit the cost and effort of creating the 3D scene for requirements elicitation, as we describe in Section 3. The challenges of collaboration in VR have been studied at different levels of detail and with different technologies[4-6]. In particular, it is a challenge to concentrate on the task and not the technology, and to facilitate the use of the technology for all collaborators. Several techniques have been used to facilitate remote collaboration in MR. Live 3D reconstruction has been used to create a shared environment among collaborators[7]. Communication cues in real life, such as movements, hand gestures, people's appearance, and pronunciation features, have been studied to identify shortcomings for asynchronous collaboration[4]. Understanding aids, such as frames for areas of interest, rays to point at certain features, and other methods of achieving awareness, are reported in [8]. Although it may be useful to create a collaborative tool that considers these studies, we were more interested in what current and common tools have to offer for requirements gathering.
Nevertheless, these studies allowed us to select—in a more informed way—the common tools for our study. There have been studies that include the notion of requirements elicitation processes for VR applications[9-11], as well as studies on the effectiveness of elicitation techniques in software engineering[2] and standards on how requirements should be documented[12]. Our work is inspired by the elicitation process followed by [10] to develop a VR theater for children, the process followed by [11] in the development of a VR app for industrial design, and the strategy used by [3] during the elicitation phase of a project using landmarks for navigational guidance. Based on these works, and the current state of the practice in agile methodologies, practitioners elicit requirements by means of storyboards[10,11], interviews, visualizations of users in their real environments[3], and focus discussion groups[9], among other methods. However, a missing element in these works is the remote collaboration among stakeholders, which is an important issue (as mentioned in [2]) that is included within our focus, as described in Section 3.1.
3 Method
3.1 VR-based requirements gathering for VR simulators and training
We included VR technologies within the requirements gathering processes of two ongoing projects to ascertain the benefits and shortcomings of such technologies. Both development processes employed standard techniques for requirements gathering[2], such as interviews, document analysis, focus groups, field trips, brainstorming, and storyboards; these techniques, however, were mostly applied through teleconferences, given the lockdowns in place owing to COVID-19. In addition to these traditional methods, we conducted two collaborative sessions with VR technologies, as follows:
● An initial VR session (Session 1) with the minimum set of technical requirements and minimum preparation from the point of view of the stakeholders, to facilitate the introduction of VR technologies.
● A more elaborate VR session (Session 2) featuring a scene closer to the one expected in the final software product.
The main goal of our method is to capture the requirements of geographically distributed teams. Our hypothesis in this study is that VR methods, in general, and 3D scenarios, in particular, could facilitate the requirements elicitation process for geographically dispersed teams.
Users participated in these sessions by means of desktop PCs, as they were the most widely available and easiest technology for all participants. Although head-mounted displays (HMDs) could also have been used and have the potential for a more immersive experience, they were not widely available to the stakeholders, they require extra training for novice users, sharing such devices is currently unsafe, and there were insufficient HMDs for all stakeholders involved. Notably, there are limitations in our analysis of the advantages of using VR, as these were obtained in a study using only PCs and not HMDs. Therefore, there are no results that reflect the highest possible immersion achievable when using HMDs. For example, if HMDs had been used, it is possible that the results relating to immersion would have been better, as would the perception of the physical characteristics of the objects, such as their dimensions. This is especially true for Session 2, which was conducted in AltspaceVR, as this tool makes it possible to take advantage of higher-fidelity interaction, such as pointing to objects with the avatars' hands. One area of our planned future work is to provide such devices to all stakeholders, conduct proper training with local supervision, and test our hypothesis in a more immersive scenario. We expect that such results will demonstrate that meetings could be more effective than the ones discussed in this study.
With proper permission from the participants, we recorded these sessions to later perform a qualitative analysis and extract lessons from these observations[2,13]. Such lessons were extracted by identifying important incidents in the videos—either interactions among participants or uses of the technology. Incidents from each project were classified, summarized, and discussed between researchers of the two projects to extract the lessons learned. We selected this method of study because of the exploratory nature of the use of these technologies and the differences between projects (i.e., main topics, team members, meeting schedules, and number of meetings). Moreover, observations such as these provide a close-to-real scenario, where we can better identify the main features of these technologies in these domains. In our analysis, we made an effort to extract lessons that were applicable to any domain.
3.1.1 Session 1-Mozilla Hubs
Session 1 is designed as a fast proof of concept, showing that VR technologies could be useful for the particular team and the particular domain at hand. It should show the real context of the simulator, if possible, with all elements relevant to the simulation. It should be easy to develop and easy to use for all stakeholders, especially the non-technical ones. Within such an environment, the requirements gathering session should focus attention on the possibilities and limitations of the technology and facilitate a discussion of the development agenda. To avoid false expectations, it should also be made clear that this quick proof-of-concept experience will be very different from the final product in development. We opted for a shared experience in Mozilla Hubs based on 360° pictures that we created from panoramas taken by clients. Appendix A describes the process that we followed for the creation of such an experience, based on the remote collaboration between clients and developers owing to the COVID-19 lockdowns.
3.1.2 Session 2-AltspaceVR
Assuming that the development team finds, from Session 1, that VR is useful for requirements gathering, Session 2 is designed to provide a more compelling VR experience. Ideally, this environment should allow stakeholders to move freely and further discuss the expected functionality. A 3D scene is created for the shared experience, similar to the 3D space required for the final product in development. The requirements gathering session should consider all information available at that moment, and create an immersive environment in which understanding can be validated and more information gathered. In this case, we needed a more complex VR setup, one that required installation and basic training. We selected Unity to create an initial 3D scene of the simulator and AltspaceVR to share it among the stakeholders. Appendix B describes the process that we followed for the setup of this session. The two VR sessions are to be introduced to stakeholders whenever they are available within the development process, although it is useful to gather some initial information prior to the use of VR. Some examples of the information that should be gathered before the VR sessions are conducted are as follows:
● Initial introductions of stakeholders and development team.
● Initial understanding of the purpose of the development.
● Expected benefits from the use of VR.
● Initial scenarios where the training will take place.
To facilitate access, both sessions should be available to stakeholders with PCs. This setup also facilitates the recording of these sessions by means of screen capture software, such as OBS Studio, on a PC connected to the sessions.
3.2 Empirical study
We performed an empirical study into the use of VR technologies in the requirements elicitation processes of two of our ongoing projects. We selected these projects given their scenarios (i.e., real 3D spaces with many elements of interest), the requirements from stakeholders (i.e., teach learners how to use these environments and their equipment), and their need for interaction between learners and experts. Their domains are also sufficiently different to test the convenience of our approach in different scenarios.
Our developers are proficient in VR technologies, hence they could create experiences for all stakeholders and train them in the use of these technologies where necessary, albeit within the current limitations due to the pandemic. A protocol for capturing information on the requirements elicitation process was defined, in which we planned each session in advance, along with the main questions that we wanted answered. This protocol helped us to document and save all the information collected from each meeting and identify key moments. We aimed for an agile, iterative development process, thus there could be further sessions also related to requirements elicitation.
VR sessions were used as soon as initial information was gathered and the environments were ready to visit, as shown in Figure 1. Although the level of information gathered before a VR session varied, we were more interested in how stakeholders reacted to the VR experiences, and how these experiences could complement the traditional methods for requirements elicitation.
After the VR sessions were recorded, the videos were analyzed to identify the advantages and shortcomings of each experience. Such lessons are described in Sections 5.1 and 5.2. Below is a description of the two projects.
3.2.1 Simulator for ship control training
We are developing a simulator of the control room of a ship in our Navy (P1 henceforth)[14]. Our purpose is to facilitate training for both normal and unusual situations that a Ship Chief of Engineering could encounter. A VR environment will facilitate such training and include situations that are difficult to reproduce in reality. We also envision the use of passive haptics for a mock-up of the control console to enrich the purely virtual experience. Requirements gathering for this project was a collaborative process between six Navy officials and two developers. We used teleconferences, document analysis, and remote visits to real ships to elicit the information for our simulator. Additionally, we performed two VR sessions of about one hour each, as mentioned in Section 3, to investigate the benefits of VR in this process. Figure 2 shows the environment for Session 1, a Mozilla Hubs environment with a skybox of the control room of our simulator. Figure 3 shows the environment for Session 2, a simple 3D mock-up of the control room of the Navy's ship. We conducted one session in Mozilla Hubs and one in AltspaceVR. Notably, although several HMDs were available for Session 2, none were used owing to the lack of internal permissions from the Navy to install the application on the devices.
3.2.2 Training environment for baby delivery
We aimed to provide a training environment for physicians, in which they can make decisions during the first minute of a baby's life after delivery[15] (P2 henceforth). The VR environment will facilitate training in remote areas of our country, while providing a uniform set of training cases.
Requirements elicitation for this project regularly involved two physicians, two developers, and the project coordinator. After some initial teleconferences in which the basic elements of the project were discussed, a thorough document analysis of the current training methods, and observations taken from video footage of students during training, we performed two one-hour VR sessions, as mentioned in Section 3. Figure 4 shows the environment for Session 1, a Mozilla Hubs environment with a skybox of a delivery room. Figure 5 shows the environment for Session 2, an AltspaceVR experience with a 3D model of a baby delivery room, imported from Unity. We conducted one session in Mozilla Hubs and two in AltspaceVR, although the first of these had to rely on streaming from a single viewpoint owing to technical issues in some of the participants' installations. Most sessions were accessed through PC clients.
4 Results
Initial sessions through teleconference in both projects identified that VR technologies were useful for the requirements elicitation process: the technology was to be included in the final result, it was a way to learn about the technology before the simulator was built, it was a cheaper and more efficient way to visit the real environments, and it allowed stakeholders to become more familiar with the technology. In this way, we could include both P1 and P2 in this analysis. Given the pandemic, it was not easy to move equipment from our lab to the main places to be simulated. Therefore, we could not use our 360° cameras to capture the environments, as we had planned. Instead, we asked stakeholders to take panoramic pictures of those environments, and based on those pictures, we created the environments as skyboxes. Given the feedback from stakeholders (i.e., stakeholders did not complain about the visualization of these environments, the visualization did not preclude their participation in the sessions, and stakeholders did not mention any particular issue regarding the environments at this stage of the project), the scenes we created were adequate for the experiences. In other words, the quality achieved in these scenarios allowed us to conduct a session on requirements gathering in which the focus was not drawn away by the quality of the VR scenarios. Session 1 in both projects was conducted after several teleconferences with stakeholders, where we discussed a set of questions related to the initial requirements and concepts of their domains. VR sessions were a novelty for most stakeholders, hence these sessions were received with some awe. Although the skyboxes of each context were somewhat distorted, none of the stakeholders complained about this. Within these environments, stakeholders were very eager to talk about their particular contexts and requirements.
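The skybox approach relies on equirectangular panoramas. As an illustration of the underlying mapping (a generic sketch, not code from either project or from Mozilla Hubs), the core of displaying such a panorama is converting each 3D view direction into pixel coordinates of the 2:1 equirectangular image:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D view direction to pixel coordinates in an
    equirectangular panorama (the 2:1 image format behind a skybox)."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, -z)                # angle around the vertical axis, [-pi, pi]
    lat = math.asin(y / r)                 # elevation angle, [-pi/2, pi/2]
    u = (lon / math.pi + 1.0) / 2.0        # horizontal texture coordinate, [0, 1]
    v = (1.0 - lat / (math.pi / 2)) / 2.0  # vertical texture coordinate, [0, 1]
    return min(int(u * width), width - 1), min(int(v * height), height - 1)

# Looking straight ahead lands in the center of a 4096x2048 panorama.
print(direction_to_equirect(0, 0, -1, 4096, 2048))  # (2048, 1024)
```

In practice, a renderer such as Unity or the browser engine behind Mozilla Hubs performs this lookup per pixel on the GPU; the sketch only illustrates why a single panoramic picture suffices to build a navigable skybox environment.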
After analyzing 5 hours of video footage of the sessions described above, we logged the incidents from each session and performed a simple classification based on our findings. Owing to the exploratory nature of this study, our classification is very simple, dividing the incidents into negative and positive. Additionally, negative incidents are subdivided into user adaptation and tool adaptation.
Negative incidents: As the name suggests, these are incidents that represent a difficulty in the task of capturing requirements.
User adaptation: These are incidents representing the technical difficulties users faced in understanding how to use the application.
Tool adaptation: These are incidents representing the limitations of the features available in the tool when using it for our sessions (i.e., capturing the requirements of geographically distributed teams).
Positive incidents: As the name suggests, these are incidents that represent an advantage in the task of capturing requirements.
In total, we identified 50 incidents from the videos. The following sections show a selection of those incidents.
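The classification above can be captured in a small data structure. The following Python sketch (illustrative only; the incident texts in the example log are hypothetical paraphrases, not our actual coding sheet) shows how incidents could be logged and tallied per session and category:

```python
from collections import Counter
from dataclasses import dataclass

# Categories follow the classification used in this study.
CATEGORIES = {"positive", "negative/user adaptation", "negative/tool adaptation"}

@dataclass
class Incident:
    session: str      # "Session 1" or "Session 2"
    category: str     # one of CATEGORIES
    description: str  # short summary taken from the video log

def tally(incidents):
    """Count incidents per (session, category) pair, rejecting unknown categories."""
    for incident in incidents:
        if incident.category not in CATEGORIES:
            raise ValueError(f"unknown category: {incident.category}")
    return Counter((i.session, i.category) for i in incidents)

# Hypothetical excerpt of a video log.
log = [
    Incident("Session 1", "negative/user adaptation", "confused by avatar selection"),
    Incident("Session 1", "negative/tool adaptation", "spatialized sound unclear"),
    Incident("Session 1", "positive", "laser used to indicate the console"),
]
print(tally(log)[("Session 1", "positive")])  # 1
```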
4.1 Session 1-Mozilla Hubs
4.1.1 Negative incidents
User adaptation:
(1) When Stakeholder A opened the link to the Mozilla Hubs environment, they were instructed to select a name and an avatar. This was confusing because this user did not know that this step was required to enter the virtual environment. Stakeholder A expected to enter the environment directly.
(2) As we were using a traditional teleconference tool (Meet/Teams) as the main communication channel, there were complications with the audio because having two tools open (Meet/Teams and Mozilla Hubs) resulted in an echo. Owing to this undesirable effect, the stakeholders and developers were asked to disable audio in Mozilla Hubs. However, Stakeholder B did not understand how to do that. Therefore, it was necessary for Developer A to share their screen with one of our technicians to provide support.
Tool adaptation:
(3) Several stakeholders experienced some problems with spatialized sound, as they could not hear a speaker in the environment clearly, depending on the distance.
(4) Stakeholder C was talking about a certain object without using the laser. The other Stakeholders and Developers did not know which item Stakeholder C was referring to.
(5) Stakeholder D, as shown in Figure 6, could not see the laser of Stakeholder E (avatar in Figure 6).
4.1.2 Positive incidents
(6) Several stakeholders took a moment to walk around the virtual scenario.
(7) As shown in Figure 7a, Stakeholder F is using the laser to indicate the pregnant woman on the gurney.
(8) As shown in Figure 7b, Stakeholder G is using the laser to indicate the engineering console.
(9) In P1, Stakeholder H indicated that the environment should be wider and proposed adding a corridor to the left of the virtual environment.
(10) Stakeholder G indicated that "the trainer should be where Stakeholder I is standing right now, and she should give the following instructions..."
(11) Stakeholder L explained the location of some objects, "If you turn around, you should find this object."
(12) Stakeholder M indicated that some objects in the collaborative virtual environment should be added. Using the laser pen, Stakeholder O drew two people and one table, as shown in Figure 8.
(13) Stakeholder N indicated that some objects in the collaborative virtual environment should be relocated for the simulator final scenario.
(14) Stakeholder P, using the laser pen, defined the objects that were not relevant and could be eliminated in the future 3D design of the final environment. In particular, as shown in Figure 8, some objects are marked with a red cross, such as the table on the right.
4.2 Session 2-AltspaceVR
4.2.1 Negative incidents
User adaptation:
(15) Stakeholder A could not install AltspaceVR, as it is not available for Mac.
(16) Stakeholder B could not install AltspaceVR and Steam until detailed instructions and support were provided, owing to a lack of knowledge of these tools.
(17) Stakeholder C had problems understanding how to control the camera rotation, as there are two possible techniques and they had trouble switching between them. The techniques are:
a. Camera with rotation defined by the position of the avatar and controlled by the keyboard arrows.
b. Camera with rotation through mouse movement.
Tool adaptation:
(18) Stakeholder D had to create their avatar and register as a friend of the main presenter during Session 2.
(19) Stakeholder A did not know how to enter the environment of the main presenter.
(20) Developer E, who was the owner of the shared space, was not connected at the beginning of the meeting. This blocked the collaborative session and none of the other members could enter the shared space until the owner could connect.
(21) Stakeholder F could not point at objects in the virtual environment, as AltspaceVR provides neither a laser in custom environments nor a pointing interaction on a PC.
(22) There is no option to point at items on a PC, even with one hand; this is only possible when using HMDs.
(23) When Stakeholder G referred to an object, the rest of the Stakeholders and Developers would go to Stakeholder G's location to view the object.
4.2.2 Positive incidents
(24) Stakeholders could visit an environment closer to that of the final simulator, as shown in Figure 9.
(25) Stakeholders and Developers moved around the 3D scenes in a more compelling way.
(26) Stakeholder H indicated that the flowmeter, shown in Figure 10, was too far from the baby because, in a real scenario, this element should be located next to the newborn.
(27) Stakeholder I indicated that the blue rubber pear, shown in Figure 11, was very large relative to the baby.
(28) Stakeholder J indicated that there were some supply cabinets and a table with a computer that should not be present in the virtual scene.
(29) Stakeholders A and C began to act out a case to show the rest of the stakeholders and developers what the flow of the simulation was expected to look like. Stakeholder A acted as a student and Stakeholder C acted as the physician. Stakeholder C presented a clinical case to Stakeholder A, Stakeholder A asked questions regarding the case, and Stakeholder C responded. Based on the answers, Stakeholder A began to say what they would do, referring to elements of the environment, such as, "I would select these elements here on the table," or "I would take the baby from here and place them in this location."
5 Lessons learned
Based on the results shown in the previous section, we conducted an analysis of the main lessons learned in each session.
5.1 Main lessons from Session 1
Even though we chose Mozilla Hubs owing to its ease of use, it was still challenging for some non-technical participants. We decided to include the basic training on Mozilla Hubs in Session 1, but some stakeholders felt that it was a waste of time. As a lesson, a separate training session is necessary, even in a very simple environment such as Mozilla Hubs. In particular, we consider that all negative user adaptation incidents should be taken into account when defining the content of a separate future training session.
Additionally, as shown in incident 3, we experienced some issues with the spatialized sound in Mozilla Hubs, as not all participants could hear a speaker in the environment. We decided to mute the audio in Mozilla Hubs altogether and use an alternative sound channel, i.e., Microsoft Teams with standard sound, to ensure that all stakeholders had the best audio possible. However, this also increased the complexity of the setup, as we had to ask participants to mute the sound in Mozilla Hubs, which caused some confusion for some participants.
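The attenuation underlying incident 3 follows the usual distance-based gain of spatialized audio. As a point of reference (this sketch uses the Web Audio API's "inverse" distance model, on which browser-based tools build; it is not Mozilla Hubs' exact code or parameters), gain drops quickly with distance:

```python
def distance_gain(distance, ref_distance=1.0, rolloff=1.0):
    """Inverse distance model from the Web Audio API PannerNode:
    gain = refDistance / (refDistance + rolloff * (max(d, refDistance) - refDistance))."""
    d = max(distance, ref_distance)
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))

# A speaker five virtual meters away is heard at 20% of full volume.
print(distance_gain(5.0))  # 0.2
```

Under this model, a participant standing only a few virtual meters from a speaker already receives a small fraction of the original volume, which illustrates why we fell back to a non-spatialized audio channel.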
The pace at which stakeholders talked during Session 1 was a challenge for developers. In particular, domain experts discussed their surroundings as if they were physically there, which was difficult for other participants to follow, given the limitations in following the gaze of other avatars or the elements they were referring to. As shown in incident 4, domain experts frequently used expressions such as "this object" without properly pointing or allowing time for other participants to become aware of the object of interest.
Although participants used the laser pen in Mozilla Hubs, as described in incident 5, it was sometimes difficult to follow owing to its small size, transparency, low contrast with the surroundings, or occlusion by other participants. A more "intelligent" laser would be useful. In fact, we examined the source code of Mozilla Hubs[16] and were able to modify the laser pen by increasing its opacity and the radius of the sphere that defines it. This change is available in the forked repository[17]. However, this modification was made using a local server and the default Mozilla Hubs scene. It was not possible to upload these changes and use a server for the personalized environment of each project, as a cloud service would need to be contracted for this, and the associated cost was not covered by the project budget. However, as it was possible to change the code to accommodate this, future work will be conducted to assess the impact of this change (Figure 12).
Additionally, Session 1 in both projects resulted in the production of a list of the main objects in the simulation, and an overall dialog making suggestions on how to improve the final simulation. The stakeholders found the Mozilla Hubs laser (pointer), which allowed them to direct the discussion to specific elements in the environment—as described in incidents 7 and 8—very useful. They discussed several issues around elements in the simulator and the spatial distribution of the environment, such as the following:
● Appearance and dimensions of the final environment (described in incident 9).
● Spatial distribution of elements in realistic locations (described in incidents 10 and 11).
● Objects that should be present in the simulation, objects that could be moved, missing objects, and irrelevant objects (described in incidents 12, 13, and 14).
Another advantage of using Mozilla Hubs, as described in incident 6, was that we could visit and walk around the real location of each project in a more economical and efficient way, especially considering the impossibility of physical travel during the COVID-19 pandemic.
Additionally, Session 1 of P2 was smoother and more beneficial than Session 1 of P1. By analyzing the comments, we concluded that developers were better prepared and had more information on the simulator's domain in the case of P2 than in P1. Therefore, we recommend an initial understanding of the problem domain before Session 1 takes place.
5.2 Main lessons from Session 2
Although we gave instructions beforehand, the first Session 2 in P2 was unsuccessful for some users owing to technical issues and misunderstandings of how to use the technology, as explained above in negative incidents 15 and 16. Therefore, as a lesson, we consider it necessary to have proper on-site technical support to install the software and create accounts for each user. We corrected such issues for a second Session 2 in P2. This helped us to ensure that stakeholders had the software properly installed, accounts properly created, and user configuration options set. However, as described in incident 17, some users had problems understanding the control of camera rotation. Moreover, as mentioned in incident 18, some stakeholders created their avatar and added the owner of the environment as a friend during the meeting; additionally, as described in incident 19, they had problems entering the environment of the main presenter. These incidents required extra effort and time during the session. Thus, a separate training session is necessary. In particular, we consider that incidents 17, 18, and 19 should be taken into account when defining the content of this training session, in which users learn how to:
- Create their own avatar.
- Add a new friend.
- Enter the custom environment of a friend.
- Use the camera controls inside the custom environment.
As described in incidents 21, 22, and 23, it is important to note that AltspaceVR does not have a feature to point to specific objects and direct the discussion to them in custom spaces. This is a disadvantage that made it difficult to determine the object that someone was referring to. Instead, users had to move to the speaker's location and determine the object that they were talking about.
As mentioned in incident 24, AltspaceVR allowed us to visit a 3D scene that represented a mock-up of the final environment. Stakeholders found the environment to be very compelling and understood the differences between the mock-up in AltspaceVR and the expected final product. Additionally, as described in incident 25, users had a more compelling way to move around the 3D scenes in this environment, and stakeholders took advantage of that feature to explain elements of the scene in detail. In particular, this session was designed to discuss important objects in the simulation, issues with the 3D models, and their behavior in the final product. It was possible to discuss the spatial distribution of the environment and its elements. This session allowed us to vividly discuss and identify more details of each simulated object, as follows:
● Its appearance and relative size, as described in incident 27, where we discussed how large one element should be in relation to the baby.
● A verification of its location in the 3D scene, as mentioned in incident 26.
● Its behavior or importance in the simulation, as described in incident 29.
Additionally, it allowed us to define a second iteration to include or remove missing and irrelevant objects, respectively. For example, as mentioned in incident 28, some objects were not relevant and could be eliminated in the final environment.
5.3 Common lessons
In general, we draw the following lessons, both tool-specific and common, from the sessions held in each of the collaborative virtual environments:
● Participants who are unfamiliar with the technology need a training session on each tool, with clarifications that prevent the negative user adaptation incidents reported above.
● The tools used in each session provide a level of immersion and allow spaces to be explored, which offers an easier and cheaper way of visiting or becoming familiar with a space.
● Collaborative environments allow a discussion of the contexts of the projects and requirements.
● The environments of each session allowed the discussion to focus on specific elements of the environment, making it possible to:
- Define which objects were important for the simulation and their behavior.
- Discuss the dimensions and appearance of scene objects.
- Discuss the spatial distribution of the environment and its elements.
- Discuss missing and irrelevant objects.
- Discuss some aspects of the behavior and importance of each object in the simulation.
● The use of a laser pen is very important to direct the discussion toward specific elements. However, in Mozilla Hubs the pointer needs to be made larger and opaque, and this change should be evaluated, because the pointer was difficult to follow owing to occlusion by other participants or low contrast with the surroundings. In contrast, in AltSpaceVR it was not possible to find a way to use a laser in custom spaces.
● Although the case studies were projects in the medical and military fields, the lessons noted in this analysis apply to any domain, such as agriculture, archaeology, astronomy, chemistry, aerospace engineering, or law, where it is necessary to build a virtual environment similar to the real one that must be validated by all stakeholders. For example, in aerospace engineering, a simulation of the interior of an aircraft could be created; in the field of law, a courtroom simulator could be built.
Although we do not have quantitative data to test our hypothesis owing to the qualitative nature of our study, these lessons show that VR environments produce several positive outcomes related to the elicitation process of our sample projects. Therefore, we are encouraged to pursue more projects with these techniques and study their benefits further.
6 Conclusions and future work
We present our experience in using VR technologies in the requirements elicitation process of VR simulators and trainers. VR facilitated the study of contextual issues at early stages of the development and allowed stakeholders to identify issues related to the position and scale of objects in the simulation. In the future, this experience could benefit from novel tools to identify the focus of a stakeholder and from ways to further facilitate the setup of a shared experience. We also plan to incorporate these tools in our new developments and identify ways to overcome the limitations of the environments we use for shared experiences, for example, by evaluating the impact of changing the opacity and size of the Mozilla Hubs laser pen, as described in Section 5.1. Additionally, we will conduct a study using HMDs and compare the differences in experience and immersion when using these devices. Finally, we consider VR an important tool in the requirements gathering stage of a VR simulator or trainer.
6.1 Creation process for Mozilla Hubs
The scenario for Session 1 was created as a skybox in Mozilla Hubs. We asked our clients to give us two panoramic pictures of the environment they wanted us to share, taken at approximately the same height and overlapping at both ends. From these panoramas, we created the textures for a cube by cutting and adjusting the pictures at their seams. We created similar textures where necessary, particularly for the floor and ceiling. Later, we added the skybox in Spoke, the Mozilla Hubs editor, together with colliders and spawn points for participants. Finally, we published our model and gave the URL to participants prior to our sessions.
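The cutting step above can be sketched in code. The following minimal Python sketch computes crop boxes for the four side faces of a skybox cube from a single panoramic strip, widening each face by a small horizontal overlap to help blend the seams in an image editor; the 4:1 strip layout and the overlap value are our own illustrative assumptions, not details of the process described above.

```python
# Illustrative sketch (not the exact process used in the paper): compute
# the crop regions that cut a 4:1 panoramic strip into the four side
# textures of a skybox cube, with a hypothetical overlap at each seam.

def skybox_side_crops(pano_width, pano_height, overlap_px=32):
    """Return (left, top, right, bottom) crop boxes for the four side faces.

    Each face is one quarter of the strip, widened by `overlap_px` on both
    sides so adjacent faces share pixels for seam blending; boxes are
    clamped to the image bounds.
    """
    face_w = pano_width // 4
    crops = []
    for i in range(4):
        left = max(0, i * face_w - overlap_px)
        right = min(pano_width, (i + 1) * face_w + overlap_px)
        crops.append((left, 0, right, pano_height))
    return crops

if __name__ == "__main__":
    # A 4096x1024 panorama yields four ~1024px-wide faces plus overlap.
    for box in skybox_side_crops(4096, 1024):
        print(box)
```

The boxes can then be fed to any image library (e.g., Pillow's `Image.crop`) to produce the six textures, with the floor and ceiling faces painted separately as described above.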
6.2 Creation process for AltSpaceVR
The scenario for Session 2 was created as a 3D scene in AltSpaceVR. The scene was prepared in Unity, using free 3D objects similar to those required in the simulator and, where necessary, creating objects similar to those in the pictures from Session 1. We exported our 3D scene from Unity to AltSpaceVR with the plugin provided by the latter. Finally, we added a teleport from the AltSpaceVR home of one of the developers to the scene imported from Unity, and added all stakeholders as friends so that they could visit the new scene.
References
Berg L P, Vance J M. Industry use of virtual reality in product design and manufacturing: a survey. Virtual Reality, 2017, 21(1): 1–17 DOI:10.1007/s10055-016-0293-9


Lloyd W J, Rosson M B, Arthur J D. Effectiveness of elicitation techniques in distributed requirements engineering. In: Proceedings of IEEE Joint International Conference on Requirements Engineering. Essen, Germany, IEEE, 2002: 311–318 DOI:10.1109/icre.2002.1048544


Bhimani A. Feasibility of using virtual reality in requirements elicitation process. 2017.


Chow K, Coyiuto C, Nguyen C, Yoon D. Challenges and design considerations for multimodal asynchronous collaboration in VR. Proceedings of the ACM on Human-Computer Interaction, 2019, 3(CSCW): 1–24 DOI:10.1145/3359142


Fraser M, Glover T, Vaghi I, Benford S, Greenhalgh C, Hindmarsh J, Heath C. Revealing the realities of collaborative virtual reality. In: Proceedings of the Third International Conference on Collaborative Virtual Environments-CVE. San Francisco, California, USA, New York, ACM Press, 2000 DOI:10.1145/351006.351010


González M A, Santos B S N, Vargas A R, Martín-Gutiérrez J, Orihuela A R. Virtual worlds: opportunities and challenges in the 21st century. Procedia Computer Science, 2013, 25: 330–337 DOI:10.1016/j.procs.2013.11.039


Teo T, Lawrence L, Lee G A, Billinghurst M, Adcock M. Mixed reality remote collaboration combining 360 video and 3D reconstruction. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Glasgow, Scotland, UK, New York, NY, USA, ACM, 2019 DOI:10.1145/3290605.3300431


Piumsomboon T, Lee Y, Lee G, Billinghurst M. CoVAR: a collaborative virtual and augmented reality system for remote collaboration. In: SIGGRAPH Asia 2017 Emerging Technologies. Bangkok, Thailand, New York, NY, USA, ACM, 2017 DOI:10.1145/3132818.3132822


Permana R H, Suryani M, Adiningsih D, Paulus E. The storyboard development of virtual reality simulation (VRS) of nursing care in respiratory system disorders course. Indonesian Nursing Journal of Education and Clinic (Injec), 2019, 3(2): 121 DOI:10.24990/injec.v3i2.202


Scaife M, Rogers Y. Informing the design of a virtual environment to support learning in children. International Journal of Human-Computer Studies, 2001, 55(2): 115–143 DOI:10.1006/ijhc.2001.0473


Thalen J P, van der Voort C. User centered methods for gathering VR design tool requirements. In: Proceedings of Joint Virtual Reality Conference of EGVE 2011-The 17th Eurographics Symposium on Virtual Environments, EuroVR 2011-The 8th EuroVR (INTUITION) Conference, 2011 DOI:10.2312/EGVE/JVRC11/075-081


ISO/IEC/IEEE. International standard-systems and software engineering: Life cycle processes: Requirements engineering. ISO/IEC/IEEE 29148: 2018 (E), 2018: 1–104 DOI:10.1109/ieeestd.2018.8559686


Sharp H, Rogers Y, Preece J. Interaction design, 5th edition. Wiley. 2019


Garnica M, Pedraza A, Lovo A, Alvarez J, Brijaldo Cano M, Torres Saenz A, Figueroa P. Prototipo consola de ingeniería en realidad virtual y simulación en el entrenamiento de tripulaciones en procedimientos de emergencia (planta de ingeniería) para unidades tipo de la Flota Naval de la Armada de Colombia. 2020.


Rivera C, Figueroa P, Casas Villate M P, van López J E, Gómez Maldonado E. Realidad virtual aplicada al curso del minuto de oro en reanimación Neonatal desde la atención básica hasta el abordaje de la reanimación avanzada dirigido a personal de salud en regiones apartadas de la geografía nacional. 2020.


Mozilla. Mozilla/hubs, GitHub. https://github.com/mozilla/hubs


Gomez V. Forked Mozilla Hubs: changed pen-laser opacity and radius. https://github.com/VivianGomez/hubs