Our Ergonomic Advantage

Creating a successful AR display requires attention to both the interface and the underlying technology. From an ergonomics perspective, Lucyd seeks to create a transparent interface [1], whereby the user treats our glasses as a natural extension of their body and senses. The device in a sense disappears, and its benefit and purpose are embraced as an extension of the body. This basic design principle has been embraced by all types of products, from hand tools like Black & Decker’s DeWalt drill and Logitech’s ergonomic mice to all of the successful smartphones, chairs, beds, surgical instruments and a host of other products. In the case of smartglasses, this is even more important, as the glasses seek to transcend visual displays while creating a window to a world of real and virtual objects.

Ergonomics & Optics

One of the first things that needs to be addressed is the size and weight of the glasses. Thankfully there is a widely adopted standard for this in the form of regular spectacles. In addition, as approximately 75% [2] of the population uses corrective lenses, AR glasses need to be able to incorporate corrective lenses to ensure user comfort and respect personal style preferences. A freeform optics design methodology enables the use of off-axis, non-rotationally symmetric geometries that allow custom, stylistic designs, including small eyeglass formats that can incorporate prescription lenses.

Additionally, the freeform optics in Lucyd Lens enable optimization of field of view, resolution and depth of field, along with the minimization of image-field distortion, such as the blur and warping caused by astigmatism and coma.
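As a hedged illustration of what those aberration terms mean (not Lucyd's optical design code), the sketch below evaluates the classical, unnormalized Zernike terms for astigmatism and coma across the exit pupil; the coefficient values are arbitrary placeholders chosen only for illustration.

import numpy as np

def wavefront_error(rho, theta, c_astig=0.10, c_coma=0.05):
    """Wavefront error (in waves) from vertical astigmatism and horizontal coma."""
    astigmatism = c_astig * rho**2 * np.cos(2 * theta)              # Zernike Z(2, +2)
    coma = c_coma * (3 * rho**3 - 2 * rho) * np.cos(theta)          # Zernike Z(3, +1)
    return astigmatism + coma

# Sample the error over a polar grid of the pupil; a larger RMS error means
# more visible blur and warping in the projected image.
rho, theta = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 2.0 * np.pi, 64))
print(f"RMS wavefront error: {np.sqrt(np.mean(wavefront_error(rho, theta) ** 2)):.3f} waves")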

To keep the weight and size of Lucyd Lens to a minimum, the number of discrete optical elements required in the system design needs to be reduced. This has the added benefit of minimizing power requirements, extending battery life for a given geometry.

The optical design will require the integration of high-spatial-resolution micro-displays with low-light-loss, optically see-through retro-reflective surfaces. This reduces stray light and light loss within the AR headset optics, enabling the use of low-brightness, low-power displays.

Further, the optical system needs to approach a field of view of approximately 114 degrees [3], roughly the binocular field one sees without AR glasses. For a user-friendly interface it is critically important not to restrict vision. To a certain extent this can be aided by eye tracking with micro-illuminators, enabling focal lengths to be adjusted dynamically to accommodate changes in resolution. In addition, gaze tracking can be used to summon information and interactivity on demand and reduce the cognitive demands on the user.
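To make that field-of-view requirement concrete, the short sketch below uses only the basic geometry FOV = 2·arctan(W / 2d); the image width and apparent distance are assumed example values, not Lucyd Lens specifications.

import math

def fov_degrees(image_width_m, apparent_distance_m):
    """Angular field of view subtended by a virtual image of width W at distance d."""
    return math.degrees(2 * math.atan(image_width_m / (2 * apparent_distance_m)))

def required_width_m(fov_deg, apparent_distance_m):
    """Virtual image width needed to fill a given field of view at distance d."""
    return 2 * apparent_distance_m * math.tan(math.radians(fov_deg) / 2)

# A virtual image placed 2 m away must appear about 6.2 m wide to fill 114 degrees,
# while a 1 m wide image at the same distance spans only about 28 degrees.
print(f"{required_width_m(114, 2.0):.1f} m")
print(f"{fov_degrees(1.0, 2.0):.1f} degrees")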

The displayed information should be visible across the full scope of vision. Human vision has roughly a 200×100 degree FOV, but state-of-the-art headsets based on waveguide displays usually achieve less than 40×40 degrees. Curved displays based on birdbath optics achieve a larger FOV (90×90 degrees), but over 80% of the projected light of the digital content is lost due to the semi-transparency of the beam splitter and the combiner. These solutions also tend to produce lower-fidelity virtual imagery and lower image sharpness. Emerging technologies such as pinlight displays, which use an array of defocused point light sources that act as tiny projectors, can provide a much wider FOV and a non-encumbering system design, but are limited by extremely low resolution. Based on state-of-the-art research in near-eye see-through display technologies, the trend is towards waveguide displays combined with computational optics components, as well as hybrid solutions.
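The roughly 80% figure follows from simple throughput arithmetic: in a birdbath design the image light typically reflects off the beam splitter, bounces off the semi-transparent combiner, then passes back through the beam splitter before reaching the eye. The sketch below multiplies illustrative transmittance and reflectance values, not measured figures for any particular headset.

def birdbath_throughput(bs_reflect=0.5, combiner_reflect=0.7, bs_transmit=0.5):
    """Fraction of display light surviving two beam-splitter passes and the combiner."""
    return bs_reflect * combiner_reflect * bs_transmit

eta = birdbath_throughput()
print(f"reaches the eye: {eta:.1%}, lost: {1 - eta:.1%}")   # ~17.5% vs ~82.5%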

From a software perspective, AR glasses need to be able to distinguish between real and virtual objects to avoid unrealistic occlusion effects between the two, which would otherwise create depth-perception and accuracy issues.
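A minimal sketch of the idea, assuming a hypothetical per-pixel depth map from the headset's sensors: a virtual pixel is composited only where it is closer than the real surface behind it, so real objects correctly occlude virtual ones.

import numpy as np

def composite(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Overlay virtual content on the camera image, respecting real-world occlusion."""
    in_front = virtual_depth < real_depth          # virtual pixel is nearer than the real surface
    out = camera_rgb.copy()
    out[in_front] = virtual_rgb[in_front]
    return out

# Example: a virtual object 1 m away is drawn in front of a real wall 2 m away.
h, w = 480, 640
frame = composite(
    camera_rgb=np.zeros((h, w, 3), dtype=np.uint8),
    real_depth=np.full((h, w), 2.0),
    virtual_rgb=np.full((h, w, 3), 255, dtype=np.uint8),
    virtual_depth=np.full((h, w), 1.0),
)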

Mechanical User Interface Gives Way to Voice and Gestures

Current augmented reality headsets only support basic hand gestures (tap, click, swipe, etc.) and controllers. Natural interactions with 3D assets are still inaccurate due to limited hand-tracking solutions, which will certainly improve significantly in the near future thanks to recent advances in deep-learning-based performance capture techniques combined with integrated real-time depth sensors. We believe that next-generation headsets will incorporate highly sophisticated speech recognition and interaction capabilities with assistants such as Google's voice assistant and Siri. Interaction with these AI tools should be hands-free, relying on voice input and bone-conduction speakers rather than switches and controls. Future systems should also be able to recognize and classify the objects that are seen. If hands are to be used, they should be used for gestures, interpreted by gesture recognizers and advanced image-recognition algorithms. One important missing feature of existing AR devices is a practical virtual keyboard interface. Old habits are replaced, but very slowly.
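As one hedged illustration of deep-learning hand tracking (using the open-source MediaPipe Hands model rather than anything Lucyd-specific), the sketch below turns fingertip landmarks from a camera frame into a simple pinch gesture.

import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def is_pinch(bgr_frame, threshold=0.05):
    """Return True when the thumb tip and index fingertip nearly touch."""
    results = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return False
    lm = results.multi_hand_landmarks[0].landmark
    thumb, index = lm[4], lm[8]                    # thumb tip and index fingertip landmarks
    return math.dist((thumb.x, thumb.y), (index.x, index.y)) < threshold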

Wireless Charging

As with modern smartphones, consumer-ready AR headsets need to support long battery life, since we expect them to be used extensively, and they should not have any tethers, as they need to be usable in any indoor or outdoor environment. AR glasses should use the same wireless inductive chargers that recharge existing smartphones. To extend battery life and minimize the need for within-day recharging, it will be necessary, as with a smartwatch, to use low-power displays.
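A back-of-the-envelope sketch of why display power dominates runtime between wireless top-ups; all figures below are illustrative assumptions for an eyeglass-sized cell, not Lucyd specifications.

def battery_hours(capacity_mah, voltage_v, avg_power_mw):
    """Approximate runtime: stored energy in mWh divided by average draw in mW."""
    return capacity_mah * voltage_v / avg_power_mw

# A 250 mAh, 3.7 V cell lasts roughly 3 h at 300 mW but over 9 h at 100 mW.
for avg_power_mw in (300, 100):
    print(f"{avg_power_mw} mW -> {battery_hours(250, 3.7, avg_power_mw):.1f} h")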

References

Acknowledgement

The author would like to express his sincere thanks to Drs. Michael Kayat, Hao Li and Chris Harrison, whose input for this short summary has proven invaluable.

 
