
HFVE software




The HFVE (Heard & Felt Vision Effects) audiotactile vision information system enables totally blind people to access aspects of visual images.  HFVE software outputs sounds that convey key shapes, and can present corresponding tactile effects on a force-feedback joystick or mouse (or tactile array).   Click here for demo videos.   Click here to try HFVE!

Logitech's Wingman® Force Feedback Mouse, which can move under its own power to trace out key shapes in images

Heard & Felt Effects

The shapes of items within images are presented via apparently-moving speech-like sounds (termed "tracers") which trace out the paths of outlines, center lines, symbolic shapes, etc., with corners emphasized.  The speech describes features such as color, layout, etc.  Coded speech sounds may be less intrusive than standard speech.  An optional non-speech "buzz track" can clarify shapes.  The moving sounds are positioned in stereo soundspace according to their horizontal location, and pitched according to the height of the feature they are representing.
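This audio mapping can be pictured as a simple function from image position to sound parameters. The following is a minimal illustrative sketch in Python, not the HFVE implementation itself; the coordinate convention and frequency range are assumptions.

    def tracer_audio_params(x, y, f_low=200.0, f_high=800.0):
        """Map a tracer point (x, y in 0..1, with y = 0 at the top of the image)
        to (pan, frequency).  pan runs from -1.0 (far left) to +1.0 (far right);
        higher features get higher pitches (assumed frequency range)."""
        pan = 2.0 * x - 1.0                       # stereo position follows x
        height = 1.0 - y                          # top of the image = greatest height
        frequency = f_low + (f_high - f_low) * height
        return pan, frequency

    # Example: a point two-thirds of the way across, near the top of the image
    print(tracer_audio_params(0.66, 0.1))         # approximately (0.32, 740.0)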

In the tactile modality, HFVE software can command a standard force-feedback device (such as Microsoft's SideWinder® Force Feedback 2 joystick or Logitech's Wingman® Force Feedback Mouse) to move the user's hand along the paths of shapes. (Alternatively, apparently-moving tracers can be presented on a tactile array, e.g. a forehead-, abdomen-, or tongue-placed electro-tactile display.)  Braille can present regular regions, and Morse code-like taps can present visual information.
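The path-following behaviour, in either modality, can be sketched as resampling a shape path and issuing one position command per time step. The code below is illustrative only: set_position is a hypothetical callback standing in for whatever drives the joystick, mouse, or audio tracer, and the speed and timing values are assumptions.

    import math, time

    def follow_path(points, set_position, speed=0.25, step_s=0.01):
        """Drive a device (or an audio tracer) along points, a list of (x, y)
        values in 0..1 coordinates, at roughly constant speed.
        speed is in path-units per second; step_s is seconds per command."""
        step_len = speed * step_s
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            seg_len = math.hypot(x1 - x0, y1 - y0)
            n_steps = max(1, int(seg_len / step_len))
            for i in range(n_steps):
                t = i / n_steps
                set_position(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
                time.sleep(step_s)                # pace the movement in real time
        set_position(*points[-1])                 # finish exactly on the last point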

Diagram illustrating HFVE effects: Tracers (with corners emphasised), Polytracers, & Matrix effects; Imprints; Multi-talker Focus effects; and Layouts
Microsoft's SideWinder® Force Feedback 2 joystick, which can move under its own power to trace out key shapes in images

In both modalities the paths followed by the tracers describe the shape, size and location of items.  As the system outputs both audio and tactile effects, users can choose which modality to follow; or both modalities can be used simultaneously.


What can be presented?

HFVE software can exhibit visual representations such as abstract shapes; still or moving image files (.GIF, .AVI, etc.); live images; data that can be presented visually; media player items (DVDs etc.); clipboard contents; drag & drop items; or sections of a computer desktop.

HFVE software exhibiting an octagon shape, with corners emphasised
HFVE software presenting a symbolic tracer to represent a face detected in a photo
HFVE software exhibiting a sine wave sent by an external app
HFVE software exhibiting part of a computer's desktop, using the Viewfinder feature
HFVE software exhibiting the layout of a section of a desktop, using the Viewfinder feature

Other apps that produce shapes and images (e.g. map applications, educational software, and certain games) can send visual representations for HFVE software to exhibit, via a simple interface.


How are Images Exhibited?

Tracers, corners, and buzz tracks

Symbolic tracer paths
Tracer paths that follow the object outline, enclosing rectangle, symbolic path, and medial line

When exhibiting items, the paths followed by the moving tracers are typically the outlines of the items, but alternatives include medial (center) lines, enclosing rectangles, and standard easily-recognised symbolic paths for common "items" such as faces and people.

Corners are emphasized with special effects which help to give a clearer impression of shape.

A "buzz track" gives additional non-speech cues about location and shape, and plays at the same time as the speech-like sounds. Its buzzing sounds are easier to position mentally in soundspace than speech, allowing more accurate perception of shape. Location and direction cues, and timbre-conveyed information, can be included on the buzz track.

"Matrix" effects are produced by dividing the image into several equal-width columns and/or rows, so that special effects can be triggered whenever the tracer moves from one such column or row to another, allowing the shape of lines to be perceived more clearly: if the tracer travels at a constant speed, the rate at which the border effects are presented corresponds to the angle of slope.

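The matrix-effect logic described above amounts to watching for column and row crossings as the tracer moves. A minimal sketch follows (assumed grid size and coordinates, not the HFVE code):

    def matrix_crossings(path, n_cols=8, n_rows=8):
        """Yield (x, y, kind) for each point where a tracer path, sampled at
        constant speed in 0..1 coordinates, enters a new column ('col') or
        row ('row') of an equal-width/equal-height grid."""
        prev_col = prev_row = None
        for x, y in path:
            col = min(int(x * n_cols), n_cols - 1)
            row = min(int(y * n_rows), n_rows - 1)
            if prev_col is not None and col != prev_col:
                yield x, y, "col"
            if prev_row is not None and row != prev_row:
                yield x, y, "row"
            prev_col, prev_row = col, row

    # At constant tracer speed, a steep line crosses rows more often than
    # columns, so the rate of border effects conveys the angle of slope.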
Polytracers & Regions

Optophone-like multiple-tracer polytracers

Additionally, optophone-like multiple-tracer effects (known as "polytracers") sweep through an area to give an intuitive impression of the layout of item content.
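An optophone-like sweep of this kind can be sketched as follows; the binary mask representation and frequency range are assumptions, and this is not the HFVE implementation.

    def polytracer_sweep(mask, f_low=200.0, f_high=800.0):
        """mask is a 2-D list of 0/1 values (rows x cols) marking the item's
        content.  For each column, from left to right, yield the list of
        frequencies to sound simultaneously during that time slice, with
        higher rows mapped to higher pitches."""
        n_rows, n_cols = len(mask), len(mask[0])
        for col in range(n_cols):
            freqs = []
            for row in range(n_rows):
                if mask[row][col]:
                    height = 1.0 - row / max(n_rows - 1, 1)    # top row = highest pitch
                    freqs.append(f_low + (f_high - f_low) * height)
            yield freqs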

User-controlled regular rectangular "regions", mapped to braille

Furthermore, user-controlled regular rectangular areas (known as "regions") can also be exhibited. These map well to braille. (Item shape and content can also be displayed via braille.)

Imprints

Diagram illustrating presenting image items via Imprints

"Imprints" rapidly and intuitively summarize an image, by conveying the approximate extent of items via multiple stationary voices, or via non-speech effects. The effect of the voices presenting successive items can give the impression of the items being "stamped out" or "printed".

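One way to picture an imprint is as a single stationary voice whose stereo spread and pitch band cover the item's approximate extent. The sketch below is illustrative only (assumed coordinates and frequency range), not the HFVE code.

    def imprint_for_item(bbox, f_low=200.0, f_high=800.0):
        """bbox = (x0, y0, x1, y1) in 0..1 coordinates, with y = 0 at the top.
        Returns a pan range (-1..1) and a frequency band that together convey
        the item's approximate horizontal and vertical extent."""
        x0, y0, x1, y1 = bbox
        pan_range = (2.0 * x0 - 1.0, 2.0 * x1 - 1.0)
        freq_band = (f_low + (f_high - f_low) * (1.0 - y1),    # bottom edge
                     f_low + (f_high - f_low) * (1.0 - y0))    # top edge
        return pan_range, freq_band

    # Presenting items one after another with such voices gives the
    # "stamped out" impression described above.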

Using Imprints with other effects

Imprints can be presented in conjunction with other effects, such as shape-conveying buzz-track tracers, and optophone-like polytracer effects.

Blind users may not need to know the exact size, shape and location of each item - the approximate size and extent presented by the Imprint is sufficient. However, users can command the system to lock on to an item when it is presented, in order to obtain the exact shape etc. of the item.


Multi-talker Focus effects

Multi-level multi-talker "Focus" effects allow several properties and items to be presented and investigated at the same time. The system presents the items that are currently the primary focus of attention via crisp, unmodified sounds, for example plain speech. At the same time it presents the speech sounds for items that are not at the focus of attention, but applies a distinct differentiating effect to them, for example by changing the type of voice (e.g. monotone, or with intonation), or by applying echo or reverberation effects.

Multi-talker focus effects, and effect relocation
The system can artificially separate the presented items (B), so that the "cocktail party effect" is maximized.

This helps users to focus their auditory attention on the item emphasized by the system, or switch their attention to another item that is also presented but not emphasized. They can then cause the system to highlight that other item instead.

For the illustrated spreadsheet, the pointer is over a particular cell, but is also over a column of cells, a row of cells, a block of cells, and the spreadsheet. The user’s focus of attention can be drawn towards any one of these spreadsheet items (cell, column, row, block etc.) while at the same time the user can be made aware of the other co-located items, which are at different levels of view.

Items in a spreadsheet at different levels of view
A blind user can rapidly navigate between such levels, for example by using a mouse wheel or a dial device, while hearing the focus effects speaking the level (e.g. cell, column, row, or block) that is currently emphasized, and at the same time being made aware of the levels above and below the current level of view, which have distinguishing effects applied.
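The level-navigation behaviour can be sketched as moving an index through an ordered list of levels, speaking the current level plainly and the adjacent levels with a distinguishing effect. Everything below (the level names, the "reverb" label, the wheel handler) is a hypothetical illustration, not the HFVE interface.

    LEVELS = ["cell", "column", "row", "block", "spreadsheet"]   # assumed ordering

    def present_levels(current):
        """Return (level, effect) pairs: the current level is plain, and the
        levels directly below and above it get a differentiating effect."""
        out = [(LEVELS[current], "plain")]
        if current > 0:
            out.append((LEVELS[current - 1], "reverb"))
        if current < len(LEVELS) - 1:
            out.append((LEVELS[current + 1], "reverb"))
        return out

    def on_wheel(current, delta):
        """Mouse-wheel or dial handler: move one level up or down, clamped."""
        return max(0, min(len(LEVELS) - 1, current + delta))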

Navigating with locked-on items, and freely exploring

At any moment the user can lock on to the item being presented. When an item is locked on, and the user moves the pointer within the area of the item, typically the items at lower (and/or higher) levels than the locked item are presented, so that the user can be aware of items in adjacent levels (or items nearby on the same level), and can switch to being locked on to one of them instead. Alternatively the system can step around the lower-level items within the locked-on higher-level item, and the user can at any time lock on to the item being presented.

The method of user interaction can also be "exploring" in style, using a moving pointer.  Click here for demo videos.  One option is for a different voice to start when a new item is announced (optionally with a different persona), with the earlier voice continuing at a reduced volume that is reduced further with each subsequent item (the previous voices can also be moved to the side to keep them distinct from the new main voice). This can produce a less abrupt effect when the announced item changes, with the previous voices gradually fading away.
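The fading-voices option can be sketched as a running list of voices whose volumes are scaled down, and whose stereo positions are pushed aside, each time a new item is announced. The representation and numbers below are assumptions for illustration, not the HFVE code.

    def update_voices(voices, new_text, fade=0.5, side_shift=0.3):
        """voices is a list of dicts with 'text', 'volume' (0..1) and 'pan'
        (-1..1).  Returns a new list with every earlier voice quieter and
        moved towards the side, plus the new main voice at full volume."""
        updated = []
        for v in voices:
            updated.append({
                "text": v["text"],
                "volume": v["volume"] * fade,                        # fade one step further
                "pan": max(-1.0, min(1.0, v["pan"] + side_shift)),   # push towards one side
            })
        updated.append({"text": new_text, "volume": 1.0, "pan": 0.0})
        return updated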

Computer Vision

Detected areas and directions of motion

HFVE software includes facilities for identifying objects within images. Image-processing techniques are used to present areas containing particular colors (known as "blobs"), and computer vision object-recognition techniques work well for certain items (such as people's faces).

Tracking a moving hand

HFVE software can also highlight areas and directions of motion in a live or moving image - the shape of the area of movement can be detected and exhibited; or a special tracking mode can be selected, whereupon the area of motion is followed, with polytracer-like effects continuously conveying the movement. Other types of item, e.g. faces or selected areas, can also be tracked.

The open-source Tesseract OCR engine is used for text recognition, and the open-source OpenCV library is used for face and motion detection.
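For readers unfamiliar with these libraries, face detection of the kind used here typically looks like the following generic OpenCV example (standard OpenCV usage, not the HFVE source; the file name is hypothetical):

    import cv2

    # Haar cascade classifier shipped with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg")                       # hypothetical image file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        print("face at", x, y, "size", w, h)              # e.g. drive a symbolic tracer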

Human figures assumed from detected faces
Person detection is less straightforward to achieve in arbitrary situations, but the system can optionally use the simple approach of assuming that any face has a person's body below it, and that the size and distance of the assumed person are related to the size of the detected face, with nearer assumed persons overlapping further-away assumed persons. (If similar-sized faces are detected close together, then the higher-located faces will typically belong to persons further away than the lower-located faces.)

The illustration shows this process in action, with 5 faces detected, and with the resultant assumed figures overlapped appropriately.

The resultant figures can then be presented using the effects, for example as imprints, or symbolic tracers.
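A minimal sketch of this face-to-figure approach follows; the body proportions and depth ordering are assumptions for illustration, not the HFVE code.

    def assumed_figure(face, body_widths=3.0, body_heights=7.0):
        """face = (x, y, w, h) of a detected face.  Return an approximate
        figure rectangle: about 3 face-widths wide and 7 face-heights tall,
        centred under the face and starting at the top of the head."""
        x, y, w, h = face
        bw, bh = w * body_widths, h * body_heights
        return (x + w / 2 - bw / 2, y, bw, bh)

    def ordered_figures(faces):
        """Smaller faces are assumed further away, so present them first and
        let the nearer (larger-faced) figures overlap them."""
        return [assumed_figure(f) for f in sorted(faces, key=lambda f: f[2])]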

Creating Audiotactile Images

Replaying a drawn audiotactile image

Blind people can draw shapes and items onto an image or blank background via keyboard, joystick, touch-screen, or speech - these can then be replayed using standard HFVE conventions.

A regular computer mouse can also be used - stereophonic buzz track-like sounds give continuous feedback about the location of the mouse pointer, whose path is mapped onto the drawing canvas. When users move the mouse (or joystick) in a certain path, the sounds they hear will be similar to those produced by the moving tracers when the shape is replayed.


Prepared Guides

Image manually marked-up with items

Although HFVE software can automatically identify and highlight some items in images, for certain applications images can be prepared by a designer (e.g. a teacher), who can produce a guide which specifies the items to be presented, their extent, importance, etc.


Presenting Data, Graphs and Charts

Audiotactile line graph

Data can be presented in the form of audiotactile shapes that are similar to standard graphs and charts, such as line graphs or bar charts. Line graphs have previously been presented using optophone-like audio mapping. However, by using audiotactile tracers that can move in any direction, shapes that resemble other graph and chart types (e.g. pie charts) can also be traced out with audiotactile effects.

Audiotactile pie chart

One difference from general image presentation is that the special effects, which normally highlight corners, can instead be used to represent data points. The apparent speed of travel and the sound timbre can change when the section changes. Several rows of data can be presented.
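Turning a data series into a tracer path, with the data points flagged for the corner-style effects, might look like this minimal sketch (assumed conventions, not the HFVE code):

    def line_graph_path(values, samples_per_segment=10):
        """Convert a list of numbers into (x, y, is_data_point) tuples in 0..1
        coordinates, with y = 1.0 at the maximum value.  Data points are
        flagged so corner-style effects can mark them."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        points = [(i / max(len(values) - 1, 1), (v - lo) / span)
                  for i, v in enumerate(values)]
        path = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            path.append((x0, y0, True))                    # data point
            for s in range(1, samples_per_segment):
                t = s / samples_per_segment
                path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), False))
        path.append((points[-1][0], points[-1][1], True))  # final data point
        return path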

Audiotactile tracer following the path of a Fourier series waveform

HFVE software can also present audiotactile waveform shapes that are defined in a spreadsheet. Several waves can be combined, and the resultant waveform presented. Blind people can create audiotactile graphs, charts and waveforms, and can experience the effect on the waveform shape of changing parameters.
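Combining spreadsheet-defined wave terms into one waveform for the tracer to follow can be sketched as a simple Fourier-series sum; the term format below is an assumption for illustration, not the HFVE spreadsheet layout.

    import math

    def combined_waveform(terms, n_points=200):
        """terms is a list of (amplitude, harmonic, phase) tuples.  Return
        n_points (x, y) samples of one period of the summed waveform."""
        points = []
        for i in range(n_points):
            x = i / (n_points - 1)                 # 0..1 across one period
            y = sum(a * math.sin(2 * math.pi * k * x + p) for a, k, p in terms)
            points.append((x, y))
        return points

    # Example: the first three terms of a square wave's Fourier series
    square_ish = combined_waveform([(1.0, 1, 0.0), (1/3, 3, 0.0), (1/5, 5, 0.0)])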



System Requirements

HFVE software runs on Microsoft Windows® and can run virtualized on other OSs.

HFVE software GUI in compact mode

HFVE software development is nearing completion.  Contact HFVE  for more information.



Demo videos

Pentagon shape presented by HFVE, and drawn by user (51s): Pentagon.mp4 / Pentagon.wmv
Shape presented via joystick and force-feedback mouse (22s): Tactile.mp4 / Tactile.wmv
HFVE OBJECTS - several tracer types; polytracers; and tracked items (41s): Objects.mp4 / Objects.wmv
HFVE REGIONS - regular rectangular areas, moved and zoomed (49s): Regions.mp4 / Regions.wmv
HFVE IMPRINTS stepping round the colors in a scene (55s): Imprints.mp4 / Imprints.wmv
HFVE LOOKING for red items; and tracking some of them (52s): Bubbles Looking.mp4 / Bubbles Looking.wmv
HFVE SHAPE and GRAPH tracers, controlled via an external app (63s): Graphs.mp4 / Graphs.wmv
EXPERIMENTAL - several pre-defined movie non-speech item effects (70s): Movie.mp4 / Movie.wmv
CLIENT-SOURCED ENTITY, showing the Dewey-Decimal® levels of view, locking, etc. (81s): Dewey.mp4 / Dewey.wmv
HFVE MULTI-LEVEL EFFECTS, showing image levels of view, item locking, etc. (58s): Levels.mp4 / Levels.wmv
HFVE MULTI-TALKER FOCUS EFFECTS, showing road map item locking, echo effects, etc. (115s): Map.mp4 / Map.wmv



Copyright © 2023 by David C. Dewhurst.  All rights reserved.  V7.41.  Updated 06-DEC-2023.