If you’re a developer with a product that has a microphone and a speaker, you can integrate the Alexa Voice Service (AVS) API into your device. Once the API has been integrated properly, the device will listen for and respond to the Alexa wake word. The API adds hands-free speech recognition, along with cloud-based automatic detection of the end of user speech, to the product you’re developing.
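To make the hands-free flow concrete, here is a minimal sketch of the loop a device runs around AVS: stay idle until the wake word fires, then stream audio until the cloud signals end-of-speech. This is not the official AVS client; the function names (`detect_wake_word`, `cloud_detects_end_of_speech`) are hypothetical stand-ins for the on-device engine and the cloud service.

```python
WAKE_WORD = "alexa"

def detect_wake_word(frame):
    # Placeholder for an on-device engine such as Sensory's or KITT.AI's.
    return frame == WAKE_WORD

def cloud_detects_end_of_speech(frame):
    # Placeholder for AVS's cloud-side end-of-speech detection.
    return frame == "<silence>"

def capture_utterance(frames):
    """Return the speech frames between the wake word and end-of-speech."""
    utterance, listening = [], False
    for frame in frames:
        if not listening:
            listening = detect_wake_word(frame)   # idle until the wake word
        elif cloud_detects_end_of_speech(frame):
            break                                 # cloud closed the stream
        else:
            utterance.append(frame)               # stream this frame to AVS
    return utterance

print(capture_utterance(["hum", "alexa", "what", "time", "is", "it", "<silence>"]))
# -> ['what', 'time', 'is', 'it']
```

In a real device the frames would be short audio buffers rather than strings, but the gating logic is the same.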
To help out, Amazon has even released a prototype project that can be built on a Raspberry Pi. To make the project happen, Amazon partnered with Sensory and KITT.AI so it could leverage their third-party wake word engines. Building this hands-free Alexa prototype on the Raspberry Pi should take you less than an hour, so be sure to check out the sample AVS app over on the Alexa GitHub page.
Amazon has been working with a number of third-party developers to get Alexa integrated into their devices, and now it’s become even easier. To learn more about the Alexa Voice Service API, and to find out which implementation is best for you, head over to the Designing for AVS landing page Amazon has set up for developers. The three typical applications they describe are push-to-talk, tap-to-talk, and voice-initiated (with a wake word).
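The practical difference between those three styles is whether the device needs an on-device wake word engine at all, and how long the microphone stays open. A minimal sketch, with illustrative names that are not from the AVS API:

```python
from dataclasses import dataclass

@dataclass
class InteractionProfile:
    name: str
    needs_wake_word_engine: bool   # on-device hotword detector required?
    mic_open_until_release: bool   # push-to-talk holds the mic while pressed

PROFILES = {
    "push-to-talk": InteractionProfile("push-to-talk", False, True),
    "tap-to-talk": InteractionProfile("tap-to-talk", False, False),
    "voice-initiated": InteractionProfile("voice-initiated", True, False),
}

# Only the voice-initiated style needs an engine like Sensory's or KITT.AI's.
print([name for name, p in PROFILES.items() if p.needs_wake_word_engine])
# -> ['voice-initiated']
```

This is why the Raspberry Pi prototype pulls in the third-party wake word engines: it demonstrates the voice-initiated style, while a button-driven product could skip that dependency entirely.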
There won’t be a one-size-fits-all solution for every project out there. If you’re interested in integrating Alexa into your product, first get an idea of what your product is capable of; from there, you can build the type of Alexa integration you think will be best for your customers. Amazon also offers examples of its automatic speech recognition profiles and its hardware and audio algorithms, and it covers the specifics of noise reduction, acoustic echo cancellation, and beamforming.
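Of those audio techniques, beamforming is the easiest to illustrate. Below is a toy delay-and-sum beamformer, the simplest member of that family: each microphone channel is shifted by a steering delay so that sound from the target direction lines up across channels, then the channels are averaged, reinforcing the aligned signal. Real implementations use fractional-sample delays and many more microphones; this sketch uses whole-sample delays on plain Python lists.

```python
def delay_and_sum(channels, delays):
    """Align each channel by its steering delay (in samples) and average."""
    aligned = [samples[delay:] for samples, delay in zip(channels, delays)]
    length = min(len(a) for a in aligned)          # trim to the shortest channel
    return [sum(a[i] for a in aligned) / len(aligned) for i in range(length)]

# Two mics hear the same pulse; it reaches the second mic one sample later.
mic1 = [0.0, 1.0, 0.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0]
print(delay_and_sum([mic1, mic2], delays=[0, 1]))
# -> [0.0, 1.0, 0.0]
```

After alignment the pulse adds coherently, while uncorrelated noise from other directions would partially cancel in the average.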
from xda-developers http://ift.tt/2e3vQuX
via IFTTT