This is a set of demonstration mechanics for improving typical supermarket applications. There is strong demand for bringing augmented reality (AR) technologies into the supermarket industry. We will consider two use cases.
Demonstration of product information.
After scanning a barcode, the system identifies the product by its unique code. The application sends a request to the API and receives the product information from the database: description, ingredients, price, and photo. This presentation can be very convenient: instead of being printed in small letters in a corner of the package, the information is structured by category. The product photo acts as an AR marker, against which the panel with additional information is rendered. The information panel moves and rotates together with the product.
Pros:
- The product already has a unique identifier: the barcode.
- The user can quickly find out the price in your store by scanning the product.
Cons:
- A photo taken from one angle acts as the marker, so tracking packaging with non-flat shapes will not work well. An alternative is 3D object tracking, but it requires scanning, testing, and maintaining a separate model for each product.
- When the product's design changes even slightly, new marker photos must be taken.
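The barcode-to-product lookup described above can be sketched as follows. This is a minimal sketch: the `Product` fields, the sample barcode, and the in-memory catalog standing in for the store API are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical product record returned by the store API.
@dataclass
class Product:
    barcode: str
    name: str
    description: str
    ingredients: List[str] = field(default_factory=list)
    price: float = 0.0
    photo_url: str = ""  # this photo doubles as the AR marker image

# Stand-in catalog; a real app would query the chain's API over HTTPS.
CATALOG = {
    "4601234567890": Product(
        barcode="4601234567890",
        name="Oat cookies",
        description="Crunchy oat cookies, 300 g",
        ingredients=["oat flakes", "sugar", "butter"],
        price=2.49,
        photo_url="https://example.com/products/4601234567890.jpg",
    ),
}

def lookup_product(barcode: str) -> Optional[Product]:
    """Resolve a scanned barcode to structured product info, or None."""
    return CATALOG.get(barcode)
```

After a successful lookup, the AR layer would load `photo_url` as the marker image and anchor the information panel to it.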
Indoor navigation.
The starting point for in-store navigation is a QR code. This code identifies the store within the chain and loads the required navigation model: the application contacts the chain's API server using the identifier obtained from the QR code. Such codes can also be used inside stores for advertising purposes. For example, a user sees a promotion for a perfume. The user scans the QR code in the corner of the promotional poster and is shown the route, relative to their current position, to the department where this product can be found.
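The QR-code step above could be sketched as follows. The payload format (`store:...;anchor:...`), the store id, and the API route are assumptions for illustration, not a real chain's scheme.

```python
from typing import Dict

def parse_store_qr(payload: str) -> Dict[str, str]:
    """Parse a hypothetical in-store QR payload such as
    'store:msk-042;anchor:perfume-poster-3' into its fields."""
    fields = dict(part.split(":", 1) for part in payload.split(";"))
    if "store" not in fields:
        raise ValueError("QR code does not identify a store")
    return fields

def navigation_model_url(store_id: str) -> str:
    # Assumed API route; the real chain's endpoint will differ.
    return f"https://api.example-chain.com/stores/{store_id}/nav-model"
```

The optional `anchor` field lets a promotional poster's code both identify the store and mark the scan location from which the route is drawn.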
The simplest example is the display of a three-dimensional map (3D model) of a store.
In this case, there is no full implementation of augmented reality. The user scans the code and views a three-dimensional map of the store, with departments labeled by the goods they sell.
A more complicated example is indoor navigation relative to anchor points.
When the user scans the QR code, a virtual 3D model of the room (the map) is overlaid relative to the location of the QR code. Each new scan re-anchors the model relative to the position of the newly scanned code. The app can then draw a line to the target point. Several such codes are needed, since every tracking deviation of a few degrees significantly shifts the virtual navigation model. This approach is not suitable for large rooms and requires the user to move the phone smoothly throughout the walk.
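The re-anchoring, and why small angular errors matter, can be shown with a simple 2D transform. This sketch assumes the anchor pose is a position plus a yaw angle; the numbers below illustrate that a 3-degree tracking error displaces a point 20 m from the anchor by about a metre, which is why multiple QR anchors are needed.

```python
import math
from typing import Tuple

def model_to_world(point: Tuple[float, float],
                   anchor_pos: Tuple[float, float],
                   anchor_yaw_deg: float) -> Tuple[float, float]:
    """Transform a point from map (model) coordinates into world
    coordinates, given the pose of the last scanned QR anchor."""
    yaw = math.radians(anchor_yaw_deg)
    x, y = point
    return (
        anchor_pos[0] + x * math.cos(yaw) - y * math.sin(yaw),
        anchor_pos[1] + x * math.sin(yaw) + y * math.cos(yaw),
    )

# A target 20 m away, placed with perfect vs. 3-degree-skewed tracking:
exact = model_to_world((20.0, 0.0), (0.0, 0.0), 0.0)
skewed = model_to_world((20.0, 0.0), (0.0, 0.0), 3.0)
drift = math.dist(exact, skewed)  # roughly one metre of displacement
```

Each new QR scan replaces `anchor_pos` and `anchor_yaw_deg`, resetting this accumulated drift to zero at the scan point.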
The best example uses the Area Targets model (see Vuforia's documentation on Area Targets).
In this example, the system orients itself in space based on a model captured with a special camera, technology, and application. The virtual model for augmented reality is built from geometry and textures captured under controlled conditions, which gives the system the best AR tracking quality. The model is large (in our case the downloaded data is about 200 MB), and it needs to be updated whenever the interior changes.
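Because the area model is so large, the app should only re-download it when the interior has actually changed. A minimal sketch of that decision, assuming the server exposes a simple integer version for each store's model:

```python
from typing import Optional

# ~200 MB per store in our case, so avoid needless re-downloads.
def needs_update(cached_version: Optional[int], server_version: int) -> bool:
    """True if no model is cached on the device, or the server's
    model version is newer than the cached one."""
    return cached_version is None or cached_version < server_version
```

In practice the check would run on app start or on QR scan, before loading the cached model into the AR session.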