New Video: Top 5 AI Developer Stories of the Month

Every month we bring you the top NVIDIA updates and stories for developers. In this month’s edition of our top 5 videos, we highlight the release of TensorRT 6, plus an NVIDIA Jetson-based robodog designed to work as a service dog.

Watch below:

5 – TensorRT 6 Now Available; Helps Break BERT-Large Record

NVIDIA released TensorRT 6, which includes new capabilities that dramatically accelerate conversational AI applications, speech recognition, 3D image segmentation for medical applications, and image-based applications in industrial automation.
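For a sense of what this looks like in practice, here is a minimal sketch of building a TensorRT engine from an ONNX model with the TensorRT 6-era Python API. The model path is a placeholder, and builder options such as workspace size and FP16 mode are illustrative assumptions that vary by model and release.

```python
# Minimal sketch (assumptions: TensorRT 6-era Python API, a placeholder
# "model.onnx" file; builder settings are illustrative, not prescriptive).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30   # 1 GB of scratch space for optimization
        builder.fp16_mode = True               # mixed precision where the GPU supports it
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):     # import the ONNX graph
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)  # optimized inference engine

engine = build_engine("model.onnx")
```

The resulting engine can then be serialized and deployed with the TensorRT runtime for low-latency inference.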

Read more>

4 – Build Your Own Speech Recognition Model with Neural Modules

Neural Modules is an open source toolkit that makes it possible to easily and safely compose complex neural network architectures using reusable components. 

Neural Modules is a platform that allows developers to easily build new state-of-the-art speech and natural language processing networks from API-compatible modules. The platform provides collections to quickly get started with application development, and it is being released as open source so users can extend the collections and contribute back to the platform.
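As a rough illustration of the programming model, here is a minimal sketch using the Neural Modules toolkit (NeMo). The import path, the "QuartzNet15x5Base-En" checkpoint name, and the local "sample.wav" file are assumptions, and the exact API differs between releases.

```python
# Minimal sketch (assumptions: a NeMo 1.x-style API, the "QuartzNet15x5Base-En"
# checkpoint name, and a local 16 kHz mono "sample.wav"; all illustrative).
import nemo.collections.asr as nemo_asr

# Load a pretrained CTC speech recognition model from the ASR collection;
# under the hood it composes reusable encoder/decoder neural modules.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="QuartzNet15x5Base-En"
)

# Greedy speech-to-text transcription of the audio file.
print(asr_model.transcribe(["sample.wav"]))
```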

Read more>

3 – How a Self-Driving Car Finds Parking

Anyone who’s circled a busy parking lot or city block knows that finding an open spot can be tricky. Faded line markings. Big trucks hiding smaller cars. Other drivers on the hunt. It all can turn a quick trip to the store into a high-stress ordeal.

To park in these environments, autonomous vehicles need a visual perception system that can detect an open spot under a variety of conditions. Such a system must perceive both indoor and outdoor spaces, separated by single, double, or faded line markings, and differentiate between occupied, unoccupied, and partially obscured spots, all under varying lighting conditions.

To enable parking space perception, we use camera image data collected in various conditions and process it with our ParkNet DNN.
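ParkNet itself isn’t public, so the hypothetical PyTorch sketch below only illustrates the kind of per-spot classification described above: a small CNN that takes a camera crop of a candidate spot and labels it occupied, unoccupied, or partially obscured. The architecture and label set are stand-ins, not NVIDIA’s network.

```python
# Hypothetical stand-in for the per-spot classification idea (not ParkNet).
import torch
import torch.nn as nn

CLASSES = ["occupied", "unoccupied", "partially_obscured"]  # assumed label set

class SpotClassifier(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to one feature vector per crop
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):                     # x: (N, 3, H, W) camera crops
        return self.head(self.features(x).flatten(1))

model = SpotClassifier().eval()
crop = torch.rand(1, 3, 64, 64)               # stand-in for one detected spot crop
print(CLASSES[model(crop).argmax(dim=1).item()])
```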

Read more>

2 – Google Develops ASR System To Help People with Speech Impairments

To help people with speech impairments better interact with everyday smart devices, Google researchers have developed a deep learning-based automatic speech recognition (ASR) system that aims to improve communication for people with amyotrophic lateral sclerosis (ALS), a disease that can affect a person’s speech.

The research, part of Project Euphonia, is an ASR platform that performs speech-to-text transcription.

“This work presents an approach to improve ASR for people with ALS that may also be applicable to many other types of non-standard speech,” the Google researchers stated in a blog post.  

The approach relies on a high-quality ASR model trained on thousands of hours of standard speech taken from YouTube videos, then fine-tuned for individuals with non-standard speech using four NVIDIA V100 GPUs and the cuDNN-accelerated TensorFlow deep learning framework. For the fine-tuning, the team collected a dataset of voice recordings from ALS patients at the ALS Therapy Development Institute.
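Google hasn’t released the training code, but the general recipe, starting from a model pretrained on standard speech and adapting only part of it to a small per-speaker dataset, looks roughly like the Keras sketch below. The checkpoint name, frozen-layer split, and dataset are all placeholders.

```python
# Generic fine-tuning sketch in TensorFlow/Keras (not Project Euphonia's code).
# "base_asr_model.h5" and the per-speaker dataset are placeholders.
import tensorflow as tf

base = tf.keras.models.load_model("base_asr_model.h5")  # pretrained on standard speech

# Freeze most of the network and adapt only the top layers, the usual recipe
# when only a few hours of one speaker's recordings are available.
for layer in base.layers[:-2]:
    layer.trainable = False

base.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
             loss="sparse_categorical_crossentropy")

# speaker_ds would be a small tf.data.Dataset of (audio features, target tokens)
# recorded from one ALS speaker; with it in hand:
# base.fit(speaker_ds, epochs=10)
```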

Read more>

1 – This NVIDIA Jetson-Based Robodog Will Always Listen to Its Owner

This robot not only looks like a dog, it learns like one too! Researchers from Florida Atlantic University’s Machine Perception and Cognitive Robotics Laboratory have just developed Astro, the robot dog.

“Astro doesn’t operate based on preprogrammed code. Instead, Astro is being trained using inputs to a deep neural network – a computerized simulation of a brain – so that he can learn from experience to perform human-like tasks, or in his case, ‘doggie-like’ tasks, that benefit humanity,” the researchers wrote on their project page.

Astro’s main duties will be to help the visually impaired as a service dog or to assist medical professionals with medical diagnostic monitoring.

Astro is equipped with sensors, radar, cameras, a directional microphone, and a set of NVIDIA Jetson TX2 modules to process the sensory inputs. With the onboard GPUs, Astro can perform up to four trillion computations a second.

Read more>

Watch more of our Top 5 videos by subscribing to our NVIDIA Developer YouTube channel.
