Monday, November 19, 2018

I've been collecting statistical data on tester interactions throughout the development cycle, mostly to get an idea of round-trip times for spoken commands. Here's some of the data visualised covering the period from v0.1.0.4 to now (v0.2.0.5).

In total, there are around 6,500 records. I've removed anything above 10 seconds, as that indicates a failed attempt that could be due to any number of reasons. I've also removed any very short attempts, as they are likely unprocessed requests from users who are not account linked.
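The outlier filtering described above can be sketched roughly as follows. The record format (a list of dicts with a `duration_ms` field) and the lower cut-off value are assumptions for illustration, not the actual schema or thresholds used by the skill:

```python
# Hypothetical interaction records; the real data has ~6,500 of these.
records = [
    {"intent": "PlayIntent", "duration_ms": 950},
    {"intent": "PauseIntent", "duration_ms": 12400},  # failed attempt, > 10 s
    {"intent": "PlayIntent", "duration_ms": 40},      # unlinked account, too short
    {"intent": "VolumeIntent", "duration_ms": 1100},
]

MAX_MS = 10_000  # anything longer is treated as a failed attempt
MIN_MS = 100     # assumed cut-off for likely-unprocessed requests

filtered = [r for r in records if MIN_MS <= r["duration_ms"] <= MAX_MS]
```

Keeping the filter as a simple range check makes it easy to re-run the analysis with different cut-offs.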

The average execution time shows the average time in ms for commands to be processed, from the point at which they reach the skill to the point at which the skill returns a response. Response times vary with geographic location: those of us in Ireland see faster responses, as the EC2 instance hosting the skill is located here. However, there's a good geographic spread of test participants across Europe and the US, so the averages above are quite representative.
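Measuring that window is straightforward: take a timestamp when the request reaches the skill and another when the response is about to be returned. A minimal sketch, in which `handle_intent` is a hypothetical stand-in for the skill's real handler:

```python
import time

def handle_intent(request):
    """Stand-in for an intent handler; the real skill may call the user's LMS here."""
    return {"response": "OK"}

# Measure from the point the request reaches the skill to the point
# the skill returns a response, in milliseconds.
start = time.perf_counter()
response = handle_intent({"intent": "PlayIntent"})
elapsed_ms = (time.perf_counter() - start) * 1000
```

Note that this excludes the time Alexa spends on speech recognition and network transit to the skill, which is why it measures execution time rather than the full user-perceived round trip.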

The shorter times for some commands are explained by the fact that they don't require a call to the user's LMS and are processed entirely inside the skill. Commands with longer round-trip times require two calls to the user's LMS. The combined average is just over 1 s.
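Computing the per-intent and combined averages from the cleaned records is a simple aggregation. The intent names and timings below are illustrative only; they just show the shape of the calculation:

```python
from collections import defaultdict

# Hypothetical cleaned records: (intent name, round-trip time in ms).
records = [
    ("PlayIntent", 1200), ("PlayIntent", 1300),  # require two LMS calls
    ("HelpIntent", 150), ("HelpIntent", 170),    # handled entirely inside the skill
]

by_intent = defaultdict(list)
for intent, ms in records:
    by_intent[intent].append(ms)

# Average per intent, plus the combined average across all records.
per_intent = {intent: sum(ms) / len(ms) for intent, ms in by_intent.items()}
combined = sum(ms for _, ms in records) / len(records)
```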



The intent frequency data is also interesting. It's skewed a little, in that some commands were not available for the full duration of the sample period, but it gives good guidance on which commands should be prioritised for optimisation and testing.
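Tallying intent frequency from the records is a one-liner with a counter. The intent names here are hypothetical examples, not the skill's actual intent list:

```python
from collections import Counter

# One entry per tester interaction, by intent name (illustrative data).
intents = ["PlayIntent", "PlayIntent", "PauseIntent", "VolumeIntent", "PlayIntent"]

frequency = Counter(intents)
# frequency.most_common() orders intents by how often testers used them,
# suggesting which handlers to prioritise for optimisation and testing.
ranked = frequency.most_common()
```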

About Hab-Tunes

Hab-Tunes is an in-development skill for the Amazon Echo that allows voice control of Squeezebox devices. This site is intended to document the skill development and help give some idea of progress.
