




Algorithm 2: Dimension_Scanning
Input: first and second HitResult anchors from AR raycasting.
Create an anchor, since the two hit results do not occur in the same frame
   startAnchor = session.addAnchor(hitResult.getHitPose());
Obtain the poses of the first and second anchors
   Pose startPose = startAnchor.getPose();
   Pose endPose = hitResult.getHitPose();
Clean up the anchor, since the tracking gets updated
   session.removeAnchors(Collections.singleton(startAnchor));
   startAnchor = null;
Compute the difference vector between the two hit locations
   dx = startPose.tx() - endPose.tx();
   dy = startPose.ty() - endPose.ty();
   dz = startPose.tz() - endPose.tz();
Compute the straight-line distance
   distance = (float) Math.sqrt(dx*dx + dy*dy + dz*dz);
return distance
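For concreteness, the following is a minimal sketch of how Algorithm 2 could be wrapped in a Java helper driven by two successive taps. The class name DimensionScanner, the method onHit, and the two-tap bookkeeping are illustrative assumptions, not the paper's implementation; note that recent ARCore releases expose Session.createAnchor and Anchor.detach in place of the preview-era addAnchor/removeAnchors used above.

   import com.google.ar.core.Anchor;
   import com.google.ar.core.HitResult;
   import com.google.ar.core.Pose;
   import com.google.ar.core.Session;

   /** Hypothetical helper: measures the distance between two tapped hit results. */
   public class DimensionScanner {
       private final Session session;
       private Anchor startAnchor;  // pins the first hit while waiting for the second tap

       public DimensionScanner(Session session) {
           this.session = session;
       }

       /** Call once per tap; returns the distance in metres after the second tap, else -1. */
       public float onHit(HitResult hitResult) {
           if (startAnchor == null) {
               // First tap: anchor the hit pose so tracking updates keep it accurate.
               startAnchor = session.createAnchor(hitResult.getHitPose());
               return -1f;
           }
           // Second tap: read both poses, then release the anchor.
           Pose startPose = startAnchor.getPose();
           Pose endPose = hitResult.getHitPose();
           startAnchor.detach();
           startAnchor = null;

           // Straight-line distance between the two hit locations.
           float dx = startPose.tx() - endPose.tx();
           float dy = startPose.ty() - endPose.ty();
           float dz = startPose.tz() - endPose.tz();
           return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
       }
   }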









Figure 3 – Plane detection: (a) Vertical Plane; (b) Horizontal Plane

Figure 4 – Sample Home Interior Models in real-world view: (a) Bed 1; (b) Bed 2; (c) Cupboard; (d) Chair
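Detecting the vertical and horizontal planes shown in Figure 3 requires the corresponding plane-finding mode on the ARCore session. The paper does not show its session configuration; the snippet below is a minimal sketch assuming a Session created and resumed elsewhere.

   import com.google.ar.core.Config;
   import com.google.ar.core.Session;

   // Enable detection of both horizontal and vertical planes (cf. Figure 3).
   Config config = new Config(session);
   config.setPlaneFindingMode(Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL);
   session.configure(config);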

           4.1   Spawning virtual models in real scene

Once a plane is detected, the 3D model to be rendered is instantiated and deployed in the real scene. The Lean API handles the different transformations and gestures. In Figure 4, the white icon indicates that the 3D model can be captured as an image, while the red dot indicates that the interior models placed in the scene are being recorded; both captures are stored in the native gallery of the mobile device. Transformations (scaling and rotation) carried out on the models are shown in Figure 5.

Figure 5 – 3D transformations: (a) Scale; (b) Rotate
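As a concrete illustration of this placement flow, the sketch below spawns a model on a tapped plane using ARCore with Sceneform. Sceneform's TransformationSystem stands in here for the gesture handling the paper attributes to the Lean API, and arFragment and furnitureRenderable (a preloaded ModelRenderable) are assumed to be set up elsewhere.

   import com.google.ar.core.Anchor;
   import com.google.ar.sceneform.AnchorNode;
   import com.google.ar.sceneform.ux.TransformableNode;

   // Hypothetical tap handler: spawn the selected furniture model on a detected plane.
   arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
       Anchor anchor = hitResult.createAnchor();      // pin the model to the plane
       AnchorNode anchorNode = new AnchorNode(anchor);
       anchorNode.setParent(arFragment.getArSceneView().getScene());

       // TransformableNode provides pinch-to-scale and twist-to-rotate gestures.
       TransformableNode model =
               new TransformableNode(arFragment.getTransformationSystem());
       model.setRenderable(furnitureRenderable);
       model.setParent(anchorNode);
       model.select();
   });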

4.2   Realistic 3D view of the featured models

It is not enough for users to view the item from only one angle; the user should be able to calibrate the 3D model with the information given so as to place the item correctly. Users can also move around the interiors to get a clearer picture of how they would look from multiple perspectives. Figure 6 shows how a model would look in the real scene during folding and unfolding, so that the user avoids the need to go to the store and try the item out manually.
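A folding/unfolding preview of this kind can be driven by a skeletal animation embedded in the model file. The paper does not name its animation mechanism; the sketch below is an assumption-laden illustration using Sceneform's ModelAnimator, where foldableModel is a hypothetical ModelRenderable exported with a fold/unfold animation.

   import com.google.ar.sceneform.animation.ModelAnimator;
   import com.google.ar.sceneform.rendering.ModelRenderable;

   // Hypothetical: play the first animation baked into the model (a fold/unfold cycle).
   void playFoldAnimation(ModelRenderable foldableModel) {
       if (foldableModel.getAnimationDataCount() > 0) {
           ModelAnimator animator =
                   new ModelAnimator(foldableModel.getAnimationData(0), foldableModel);
           animator.start();  // the model folds/unfolds in place in the AR scene
       }
   }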






