DIY assistive technology prototypes that can help people with disabilities communicate

Researchers at the University of Maryland, Baltimore County (UMBC) have developed do-it-yourself (DIY) assistive technology prototypes that are revolutionizing how people with disabilities can access tools that will help them interact with the world.

Foad Hamidi, assistant professor of information systems, and his collaborators at York University in Canada and the Pamoja Community Based Organization in Kenya have created research-based assistive technology platforms that help people with different abilities, across cultural contexts, learn to use simple computers to communicate.

Importantly, the development of platform prototypes has been grounded in close collaboration among researchers and community members in Kenya and the U.S.

The results have been published in IEEE Pervasive Computing, a journal of the Institute of Electrical and Electronics Engineers (IEEE).

In the field of assistive technology, high costs often prevent people with disabilities and their families from accessing useful communication technologies.

Existing tools that facilitate communication are especially hard to individualize and can be costly, explains Hamidi.

However, computers have steadily become less expensive to distribute and easier to use. This makes computer-based assistive technologies more accessible to people with disabilities, both inside and outside of the U.S.

Hamidi and his team have worked to develop and test two platforms: SenseBox and TalkBox.

These platforms are open source and only require a Raspberry Pi (an inexpensive microcomputer), low-cost sensors, and a speaker to operate.

TalkBox allows users to communicate by touching images on an attached surface to play audio files stored within the system.

The images and sounds can be customized during assembly, depending on an individual’s unique needs. For example, TalkBox can be adapted to fit on a wheelchair, and it can include individualized visual elements.

The TalkBox could display illustrations of faces showing different expressions, which a student could use to express an emotion; adjustments like these make the technology highly customizable.
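The article does not include the platforms' source code, but the TalkBox interaction model is simple enough to sketch. The following hypothetical Python example maps touch inputs (modeled here as GPIO buttons) to stored audio clips; the pin numbers, file names, and the choice of the gpiozero and pygame libraries are illustrative assumptions, not details of the UMBC implementation.

```python
# Hypothetical TalkBox-style sketch: each touch input plays one recorded phrase.
from signal import pause

import pygame
from gpiozero import Button

pygame.mixer.init()

# Each touch input (GPIO pin number) maps to one pre-recorded audio file.
PHRASES = {
    17: "sounds/hello.wav",
    27: "sounds/yes.wav",
    22: "sounds/no.wav",
}

def make_player(path):
    sound = pygame.mixer.Sound(path)  # load the clip once at startup
    return lambda: sound.play()       # called whenever the button is pressed

buttons = []
for pin, path in PHRASES.items():
    button = Button(pin)
    button.when_pressed = make_player(path)
    buttons.append(button)  # keep references so the callbacks stay registered

pause()  # block forever, handling button presses as they arrive
```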

SenseBox relies on a similar model of stimuli being translated into audio, but it operates using tactile objects, which are recognized by sensors.

These tactile objects are embedded with radio frequency identification (RFID) tags, similar to how objects are tagged in stores. The objects can be 3-D printed, which permits extensive customization.
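In the same hypothetical spirit, the SenseBox model can be sketched with a common low-cost MFRC522 RFID reader and the community mfrc522 Python library: the reader reports the ID of a tagged object, and the program plays the audio file associated with that ID. The tag IDs and file names below are placeholders, not values from the project.

```python
# Hypothetical SenseBox-style sketch: a tagged object triggers its audio file.
import pygame
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522

pygame.mixer.init()

# Map the ID stored on each tactile object's RFID tag to an audio file.
TAG_SOUNDS = {
    123456789: "sounds/play_music.wav",
    987654321: "sounds/snack_time.wav",
}

reader = SimpleMFRC522()

try:
    while True:
        tag_id, _text = reader.read()  # blocks until a tagged object is presented
        path = TAG_SOUNDS.get(tag_id)
        if path:
            pygame.mixer.Sound(path).play()
finally:
    GPIO.cleanup()  # release the GPIO pins when the program exits
```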

TalkBox was successfully used in Kenya by a special education teacher who was able to input the names of all of his students onto the device to be used in class.

This application of the device led to a noticeable increase in participation and inclusion. The success of the tool within that classroom has already generated interest in the technology among other potential stakeholders in Kenya.

The researchers hope to work with community members in Kenyan universities and healthcare facilities to expand the availability of this tool, and help stakeholders learn how to use it.

In the U.S., SenseBox was used by a speech-language pathologist and a nonverbal client with low vision and autism spectrum disorder.

The client was able to play his favorite music by holding the desired CD case to the device, which was a major stepping stone in his communication.

Previously, he had difficulty using other devices to achieve this same goal of playing his favorite artist.

The success of these DIY devices rests on the fact that people with limited experience using technology can quickly learn how to use the tools and teach others how to use them.

Hamidi and his research partners see their close collaboration with those who will be using TalkBox and SenseBox as essential to ensuring the tools are tailored to meet their needs.

The researchers continue to explore how best they can scale up the use of these new tools to support people with disabilities who are seeking new ways to communicate in a broad range of cultural contexts.


Communication support is defined broadly as “anything that improves access to or participation in communication, events, or activities. Support includes strategies, materials, or resources that are used by people with impairments or by others who communicate with people with impairments.

It involves modifications in the environment around the person with impairments or modifications to activities in which people engage. It also includes supportive attitudes that foster communication participation.

Finally, support includes policies and practices of agencies and institutions that foster communication success” (King, Simmons-Mackie, & Beukelman, 2013, page 9).

According to Simmons-Mackie (2013), there are key assumptions that justify intervention with communication supports. While these foundational concepts are described within a chronic aphasia context, they resonate for patients with neurodegenerative disease as well:

(1) The ultimate goal of all treatment is to enhance participation in communicative life. Regardless of the stage of the neurodegenerative process, the patient and his communication partners can set goals that achieve meaningful outcomes.

(2) Communication is a collaborative enterprise. Since meaning is negotiated between and among participants, those with communication challenges and their partners must develop strategies and resources to send and receive messages successfully.

(3) Communication support is an ethical issue. It is the responsibility of the interventionist to identify and establish any method, strategy or resource that might help a patient communicate more successfully.

The National Joint Committee for the Communication Needs of Persons with Severe Disabilities presents a Communication Bill of Rights (Table 1) that clearly states that all people with disabilities, including those with severe speech and language impairment secondary to neurodegenerative disease, have a basic right to affect, through communication, the conditions of their existence (National Joint Committee for the Communication Needs of Persons with Severe Disabilities, 1992).

Table 1:

NJC communication bill of rights

Each person has the right to:
• request desired objects, actions, events and people
• refuse undesired objects, actions, or events
• express personal preferences and feelings
• be offered choices and alternatives
• reject offered choices
• request and receive another person’s attention and interaction
• ask for and receive information about changes in routine and environment
• receive intervention to improve communication skills
• receive a response to any communication, whether or not the responder can fulfill the request
• have access to AAC (augmentative and alternative communication) and other AT (assistive technology) services and devices at all times
• have AAC and other AT devices that function properly at all times
• be in environments that promote one’s communication as a full partner with other people, including peers
• be spoken to with respect and courtesy
• be spoken to directly and not be spoken for or talked about in the third person while present
• have clear, meaningful and culturally and linguistically appropriate communications

From the National Joint Committee for the Communicative Needs of Persons with Severe Disabilities. (1992). Guidelines for meeting the communication needs of persons with severe disabilities. ASHA, 34(Suppl. 7), 2–3.

The World Health Organization’s International Classification of Functioning, Disability and Health (ICF) (World Health Organization, 2001) provides a useful framework for AAC intervention, which is delivered at the participation level rather than the impairment level of disability (Worrall & Frattali, 2000).

The ICF defines participation as “involvement in a life situation” (page 123), and places activities and participation, environmental barriers and facilitators, personal factors, as well as body function and structure within a model of health conditions. Borrowing from an aphasia framework again (Kagan et al., 2008), the A-FROM (Aphasia: Framework for Outcome Measurement) presents a heuristic that has been adapted from the ICF to increase relevance to communication disorders in a clinically friendly format.

Within the participation framework, the focus of communication intervention shifts from an impairment- or restoration-based approach to one that emphasizes compensation for lost function, with reliance on AAC (Fried-Oken, Rowland, & Gibbons, 2010).

For example, instead of improving speech intelligibility through drill and practice exercises, AAC would compensate for a speech impairment with tools, environmental and partner adaptations, and behavioral changes.

Rather than working on goals to return patients to their previous levels of functioning, AAC provides ways to remain engaged in daily activities with alternative compensatory approaches or durable medical equipment.

An on-screen keyboard and joystick, for instance, might be provided for a person with limited upper extremity skills for typing, permitting computer use with alternative writing access methods. AAC encompasses a variety of strategies, techniques, and devices, ranging from simple yes/no eye blinks to sophisticated computer-based systems and speech-generating devices.

For patients with neurodegenerative disease who present at different stages of communication impairment, these supports initially will facilitate and maintain participation in daily activities. A mechanism must be in place to reevaluate and adjust communication supports over time as needs and skills change.

Acceptance of multiple communication options by the patient and his/her family, as well as the early inclusion of communication partners in all aspects of treatment, are critical elements that are likely to ensure AAC acceptance and successful outcomes.

No-tech AAC

No-tech or unaided AAC refers to any natural form of communication that uses the human body, with no other equipment required (Vanderheiden & Yoder, 1986). Examples include vocalizations, tongue clicks, eye movements and blinks, and gestures.

One technique often used by patients with intact upper extremity function is writing letters in the air (Fried-Oken, Howard, & Stewart, 1991).

Another approach called partner-assisted scanning involves a communication partner reciting aloud the letters of the alphabet or a list of messages, waiting for a signal (e.g. eye blink, eye movement or vocalization) to indicate the desired option (Bauby, 1998).

While formal sign languages such as American Sign Language may also be considered no-tech AAC, they are not often used with people with degenerative conditions due to the time and effort required for both the individual and communication partners to learn a new language.

To aid caregivers in the consistent interpretation of communicative movements, facial expressions, and sounds, a gesture dictionary may be constructed that describes an individual’s gestures and pairs them with their associated meanings (e.g. throat clearing means ‘I need ice chips’).

Low-tech AAC

Low-tech AAC involves the use of non-computer-based equipment, from pen and paper to alphabet boards (Wu & Voda, 1985), communication books and simple alerting systems. Individuals experiencing challenges with speech intelligibility can write messages or draw pictures to communicate intent (Lasker, Hux, Garrett, Moncrief, & Eischeid, 1997).

Similarly, communication partners can support language expression by using a written choice strategy. During conversation, if a person is unable to respond verbally, the partner writes down possible responses.

The person with the communication impairment then can indicate his choice by pointing to the selected word (Lasker et al., 1997). Alternatively, communication partners can enhance comprehension by supplementing spoken language with gestures, written words or phrases, drawings, or diagrams.

This technique, termed augmented input, occurs dynamically during conversation, providing an effective low-tech communication support (Ball & Lasker, 2013; Wallace, Dietz, Hux, & Weissling, 2012). Communication books and boards may be developed that are text-based (with letters, words, or whole sentences), symbol-based (with photos or drawings representing topics and messages), or a combination of the two, and should be customized to each individual’s personal needs and interests (Khayum, Wieneke, Rogalski, Robinson, & O’Hara, 2012).

Communication boards and books are commonly used with direct selection, where the patient indicates the desired items with an anatomical pointer or device (e.g. hand, finger, head or chin stick, stylus, or laser pointer).

Some communication boards are designed for use with eye movements (e.g. ETRAN) (Goossens’ & Crain, 1987), or as visual supports for the partner-assisted scanning method described above. Appropriate size, format, selection method, text, and symbols must be considered to personalize the low-tech options based on assessment results.

When natural speech is still a viable option, writing and alphabet boards can augment intelligibility. Traditionally referred to as supplementation strategies, this definition is currently expanding beyond alphabet, topic and gestural supplementation to include augmenting speech with pictures via mobile technology and conversation management strategies (Hanson, Beukelman, & Yorkston, 2013).

Individuals with severe dysarthria benefit from pointing to written topic cues or letters on an alphabet board to clarify speech productions (Hustad, Jones, & Dailey, 2003).

A technique called alphabet supplementation or first-letter pointing uses an alphabet board to improve speech intelligibility and has been found to increase intelligibility by 5 to 69%, with greater improvements for those with more severe dysarthria (Hanson, Yorkston, & Beukelman, 2004; Hanson, Beukelman, Heidemann, & Shutts-Johnson, 2010). The speaker points to the first letter of each word on the alphabet board as he says it, which slows down speech, creates pauses between words, and provides additional cues to the listener.

High-tech AAC: Speech-generating devices

A speech-generating device (SGD) is an electronic AAC system that allows the user to type or select a message that is spoken aloud. When considering an SGD for a patient with motor speech, language or cognitive impairments, at least four features must be examined:

(1) the technology that will house the SGD;

(2) the symbols to represent language on the machine (either letters for spelling, photos or pictures, or a combination of symbols);

(3) the access method or means to select language on the device; and

(4) the output method or type of speech that will be generated (Fishman, 1987).

SGDs are either ‘dedicated’ and function solely for AAC, or they are ‘integrated’, with access to AAC and to other computer applications and functions. Most SGDs presently are built on general technology platforms, either on a laptop or on a touchscreen tablet that is placed into a custom-built box. Communication apps are very popular and exist to turn a standard tablet computer or smartphone into an SGD. A list of apps for AAC can be found at www.janefarrall.com.

The size, portability, durability, capacity, and flexibility of the SGD must be considered, as the physical and communication needs of individuals with neurodegenerative diagnoses change over the natural course of the disease.

Speech output may be either digitized (recordings of natural speech) or synthesized (a computer-generated voice that uses text-to-speech software to convert a typed message into speech) (Fishman, 1987).

Digitized messages can be recorded by the user while intelligibility is still adequate, or by another speaker with a similar-sounding voice or the same gender. Messages that are produced with digitized speech must be determined in advance. Synthesized speech offers the advantage of allowing the user to produce novel messages, although current synthesized voices lack natural inflection, intonation, and the ability to express emotion. Individuals who know in advance that they may lose their speech can record phrases in their own voices for eventual use on an SGD.

This process is known as message banking (Costello & Dimery, 2014; Costello, 2014; Santiago & Costello, 2013). A similar process called voice banking is used to create customized synthetic speech based on the user’s own voice. One reliable voice banking system, named ModelTalker, has been implemented in a number of current software programs for text-to-speech applications (Bunnell, Lilley, Pennington, Moyers, & Polikoff, 2010; Yarrington, Pennington, Gray, & Bunnell, 2005; Yarrington et al., 2008).
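To make the digitized/synthesized distinction above concrete, here is a brief, hypothetical Python sketch rather than any particular SGD’s implementation; the pygame and pyttsx3 libraries, the file name, and the messages are illustrative assumptions.

```python
# Illustrative contrast between the two SGD output types described above.
import pygame
import pyttsx3

# Digitized output: replay a message the user banked while speech was intelligible.
pygame.mixer.init()
pygame.mixer.Sound("recordings/im_thirsty.wav").play()
while pygame.mixer.get_busy():
    pygame.time.wait(100)  # let the recording finish playing

# Synthesized output: any novel typed message can be spoken via text-to-speech.
engine = pyttsx3.init()
engine.setProperty("rate", 150)  # speaking rate in words per minute
engine.say("Could you open the window, please?")
engine.runAndWait()
```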

The term ‘access method’ refers to the way the user produces messages on an SGD. SGDs can be adapted for access by individuals with a variety of physical abilities, including those who are unable to type on a keyboard or touch screen.

Movements of the hands, feet, head, or even the eyes can be used to control a computer cursor (Fager, Beukelman, Fried-Oken, Jakobs, & Baker, 2012), and switches can harness even the smallest muscle movements to make selections as the computer scans through available options (Fishman, 1987).
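As a hypothetical illustration of such switch scanning (a sketch under assumed hardware, not a clinical product), the snippet below cycles through a short list of options and treats a single switch press, read from a GPIO pin via the gpiozero library, as selecting the currently highlighted item; the pin number, options, and timing are placeholders.

```python
# Minimal single-switch scanning sketch: options are highlighted in turn,
# and one switch press selects the currently highlighted item.
from gpiozero import Button

OPTIONS = ["yes", "no", "water", "reposition me", "call someone"]
SCAN_INTERVAL = 1.5  # seconds each option stays highlighted

switch = Button(23)  # one switch, driven by whatever movement the user controls

def scan_once():
    """Cycle through the options until the switch is pressed; return the choice."""
    while True:
        for option in OPTIONS:
            print(f"> {option}")  # stand-in for visual or auditory highlighting
            # wait_for_press returns True if the switch was pressed within the
            # interval, and False if the interval elapsed and scanning moves on.
            if switch.wait_for_press(timeout=SCAN_INTERVAL):
                return option

print(f"Selected: {scan_once()}")
```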

Brain-computer interface systems may one day allow individuals with little or no voluntary muscle activity, such as those with total locked-in syndrome, to control an SGD using only their brain activity (Fager et al., 2012).

Dedicated SGDs produced by AAC manufacturers, along with accessories for access and mounting, are covered by Medicare, Medicaid, and most private insurance providers. An evaluation by an SLP and a physician’s prescription are required.

Communication partners

Conversation partner inclusion is a key component of all AAC interventions (American Speech-Language-Hearing Association, 2005). Since communication is not a solitary activity, the behavior and attitudes of communication partners influence the success of AAC use (Scherer, Jutai, Fuhrer, Demers, & Deruyter, 2007; Smith & Connolly, 2008).

Interacting with a person who has a language, cognitive, or speech impairment places a novel set of demands on the communication partner, especially if the partners have been lifelong conversants before symptoms developed (Chapey et al., 2001).

One of the standards of care in optimizing communication for people who are losing natural speech is finding a way to improve the communication effectiveness of various partners (Ball & Lasker, 2013; Ball, Fager, & Fried-Oken, 2012).

Effective partners understand turn taking and engage in balanced conversations, ask questions but also share in topic shifts, and co-construct messages with a range of communication supports (Thiessen & Beukelman, 2013). One way to determine the role of each communication partner is to place them within a social network.

The Social Networks Inventory (Blackstone & Hunt-Berg, 2003) was developed for this purpose within the AAC field, and provides a framework for delineating personal goals for each patient-partner dyad.

Communication partner training is a well-established, evidence-based intervention for chronic aphasia (Lyon et al., 1997; Simmons-Mackie, Raymer, Armstrong, Holland, & Cherney, 2010), and has been emphasized for AAC.

Training refers to formal instruction as well as opportunities to practice communication supports in a variety of environments with those who need AAC (Thiessen & Beukelman, 2013).

Partner training must focus on enhancing interactions by determining the optimal qualities of partner behaviors that support verbal engagement, permitting an individual to maintain independence and participate in daily activities (Kagan, Black, Duchan, Simmons-Mackie, & Square, 2001; Kent-Walsh & McNaughton, 2005; Simmons-Mackie et al., 2010). Binger and colleagues (2012) delineate effective roles for AAC stakeholders and create a model for instruction and preparation of communication partners. Critical issues that must be addressed include managing partner attitudes towards AAC technology, establishing priorities for social engagement, and preserving the AAC user’s roles.

Conversation partner training must continue to evolve throughout the course of the disease, shifting to match the patient’s changing needs and abilities. As impairments worsen, partners take on increased responsibility to assist with communication (Kagan, 1998; Kagan et al., 2001). The timing of intervention and the introduction of new communication supports are two fundamental principles that remain critical when integrating partners into communication management.

Communication supports for patients with progressive speech and motor impairments

Symptomology

Approximately 80 to 96% of people with ALS will become unable to meet their communication needs through natural speech at some point during the disease progression (Beukelman, Ball, & Pattee, 2004; Sitver & Kraat, 1982).

Like other aspects of ALS, communication difficulties vary significantly from person to person (Hanson, Yorkston, & Britton, 2011). A person with ALS may present with a mixed flaccid-spastic dysarthria that is characterized by impaired articulation, slowed speech, reduced vocal loudness, rough or breathy voice quality, hypernasality, fatigue or shortness of breath with speech, reduced utterance length due to impaired breath support, or a combination of any of the above (Ball, Beukelman, & Bardach, 2007; Darley, Aronson, & Brown, 1969; Kühnlein et al., 2008).

While these symptoms are always progressive, the rate of change varies. ALS is often associated with cognitive changes, ranging from mild impairment to frontotemporal dementia (Goldstein & Abrahams, 2013; Lomen-Hoerth et al., 2003; Neary, Snowden, & Mann, 2000), or with language impairments including semantic dementia or primary progressive aphasia (PPA) (Ball et al., 2007; Taylor et al., 2013).

Individuals with advanced ALS who elect to undergo tracheotomy and receive mechanical ventilation may progress to a locked-in state (Hayashi, Kato, & Kawada, 1991; Hayashi & Oppenheimer, 2003). Individuals with classic locked-in syndrome (LIS) have lost all voluntary muscle function aside from blinking and limited eye movement (Bauer, Gerstenbrand, & Rumpl, 1979; Murguialday et al., 2011).

In total LIS, even eye and eyelid movements are lost (Bauer et al., 1979; Murguialday et al., 2011), along with any possibility for communication through movement-based signals. Novel techniques are being developed to address the communication needs of individuals with classic and total LIS, as medical interventions evolve for this clinical group (Beaudoin & De Serres, 2008; Casanova, Lazzari, Lotta, & Mazzucchi, 2003; Doble, Haig, Anderson, & Katz, 2003; Schjolberg & Sunnerhagen, 2012).


Source:
University of Maryland Baltimore County
