
The tech industry’s accessibility-related products and launches this week | Engadget

Every third Thursday in May, the world commemorates Global Accessibility Awareness Day, or GAAD. And as has become common practice in recent years, major tech companies are taking advantage of this week to share their latest accessibility-focused products. From Apple and Google to Webex and Adobe, the industry’s biggest players have launched new features to make their products easier to use. Here’s a quick recap of this week’s GAAD news.

Apple’s launches and updates

First up: Apple. The company had a huge batch of updates to share, which makes sense since it typically releases most of its accessibility-focused news around this time each year. For 2023, Apple is introducing Assistive Access, an accessibility setting that, when enabled, changes the home screen for iPhone and iPad to a layout with fewer distractions and icons. You can choose from a row- or grid-based layout, the latter of which results in a 2×3 arrangement of large icons. You can choose which apps appear, and most of Apple’s first-party apps are supported.

The icons themselves are larger than usual, with high-contrast labels that make them easier to read. When you tap on an app, a back button appears at the bottom for easier navigation. Assistive Access also includes a new Calls app that combines phone and FaceTime features into one tailored experience. Messages, Camera, Photos, and Music have also been tweaked for the simpler interface, and all have high-contrast buttons, large text labels, and tools that, according to Apple, “help trusted supporters tailor the experience to the person they’re supporting.” The goal is to offer a less distracting or confusing system for those who find the typical iOS interface overwhelming.

Apple also launched Live Speech this week, which works on iPhone, iPad, and Mac. It allows users to type what they want to say and have the device read it aloud. It works not only for in-person conversations, but also for phone and FaceTime calls. You can also create shortcuts for phrases you use often, such as “Hello, can I have a tall vanilla latte?” or “Excuse me, where’s the bathroom?” The company also introduced Personal Voice, which lets you create a digital voice that sounds like your own. This can be helpful for people who are at risk of losing their ability to speak due to conditions that can affect their voice. The setup process involves reading along with randomized text prompts for about 15 minutes on an iPhone or iPad.


For people with visual impairments, Apple is adding a new Point and Speak feature to Detection Mode in Magnifier. This uses an iPhone or iPad’s camera, a LiDAR scanner and on-device machine learning to understand where a person has placed their finger and scans the target area for words, before reading them back to the user. For example, if you hold up your phone and point at different parts of the controls on a microwave or washing machine, the system will tell you what the labels are, such as “Add 30 seconds,” “Defrost,” or “Start.”

The company made a slew of other smaller announcements this week, including updates that allow Macs to link directly to Made-for-iPhone hearing aids, as well as phonetic suggestions for text editing in voice typing.

Google’s new accessibility tools

Meanwhile, Google is introducing a new Visual Question and Answer (or VQA) tool in the Lookout app that uses AI to answer follow-up questions about images. Eve Andersson, the company’s accessibility lead and senior director of Products for All, told Engadget in an interview that VQA is the result of a collaboration between the inclusion and DeepMind teams.


To use VQA, open Lookout and launch Pictures mode to scan an image. After the app tells you what’s happening in the scene, you can ask follow-up questions to gather more details. For example, if Lookout says the image shows a family having a picnic, you could ask what time of day it is and whether there are trees around them. This lets the user decide how much information they want from a photo, rather than being limited to an initial description.

It is often difficult to figure out how much detail to include in an image description: you want to provide enough to be useful, but not so much as to overwhelm the user. “What is the right amount of detail to give to our users in Lookout?” Andersson said. “You never really know what they want.” She added that AI can help establish the context for why someone is asking for a description or more information, and deliver the right amount of it.


When it launches in the fall, VQA will give the user a way to decide when to ask for more and when they’ve learned enough. Of course, since it’s powered by AI, the data generated may not be accurate, so there’s no guarantee that this tool will work perfectly, but it’s an interesting approach that puts the power in users’ hands.

Google is also expanding Live Captions to work in French, Italian, and German later this year, and bringing the wheelchair-accessible labels for places in Maps to more people around the world.

Microsoft, Samsung, Adobe and more

Many more companies had news to share this week, including Adobe, which is rolling out a feature that uses AI to automate the process of generating tags for PDFs, making them friendlier to screen readers. The feature uses Adobe’s Sensei AI and also indicates the correct reading order. Since this could significantly speed up the process of tagging PDFs, people and organizations could use the tool to work through backlogs of old documents and make them more accessible. Adobe is also launching a PDF Accessibility Checker to “enable large organizations to quickly and efficiently evaluate the accessibility of existing PDFs at scale.”

Microsoft also had some minor updates to share, particularly around Xbox. It has added new accessibility settings to the Xbox app on PC, including options to turn off background images and animations, so users can reduce potentially distracting, confusing, or triggering components. The company also expanded its support pages and added accessibility filters to its online store to make it easier to find games optimized for accessibility.

Meanwhile, Samsung announced this week that it’s adding two new levels of ambient sound settings to the Galaxy Buds 2 Pro, bringing the total number of options to five. This gives those using the earbuds to listen to their surroundings more control over how loudly they hear ambient sound. They can also select different settings for each ear and create custom profiles tailored to their hearing.

We also learned that Cisco, the company behind the Webex video conferencing software, is partnering with speech recognition company VoiceITT to add transcripts that better support people with non-standard speech. This builds on Webex’s existing live translation feature and uses VoiceITT’s AI, which familiarizes itself with a person’s speech patterns to better understand what they want to communicate. It then captures and transcribes what is said, and the captions appear in a chat bar during conversations.


Finally, Mozilla announced that Firefox 113 would be more accessible thanks to an improved screen reader experience, while Netflix unveiled a highlight reel showcasing some of its latest assistive features and developments from the past year. In its announcement, Netflix said that while it has made “progress in accessibility, [it knows] there is always more work to be done.”

That sentiment applies not just to Netflix, nor to the tech industry alone, but to the entire world. While it’s nice to see so many companies take the opportunity this week to release and highlight accessibility-focused features, it’s important to remember that inclusive design shouldn’t and can’t be a once-a-year effort. I was also pleased to see that, despite the current fervor around generative AI, most companies this week didn’t cram the buzzword into every accessibility announcement, and for good reason. As Andersson put it, “We usually think about user needs,” taking a problem-first approach rather than starting with a technology and looking for somewhere to apply it.

While it’s probably at least partially true that GAAD announcements are a bit of a PR and marketing game, some of the tools launched this week can genuinely improve the lives of people with disabilities or other needs. I call that a net win.

All products recommended by Engadget are selected by our editorial team, independent from our parent company. Some of our stories contain affiliate links. If you purchase something through one of these links, we may earn an affiliate commission. All prices are correct at time of publication.

May 20, 2023