Monday, June 26, 2017

Alpha v0.2.1 demo

A quick demo of some new features coming in Alpha v0.2.1.


We have new 'Next' and 'Previous' commands that can be used to skip back and forth through playlists.

Not only that, but for the selected device, these commands can be issued without invoking the skill at all. As long as the Squeeze Box skill (the hab tunes skill in the demo) is the most recently invoked audio playback skill, simply saying 'Alexa, next' is enough to skip to the next track on the currently selected player.
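For the curious, here's a rough sketch (not the skill's actual code) of how Alexa's built-in next intent can be mapped onto the LMS JSON-RPC interface. The server address and player MAC below are placeholders.

import json
import urllib.request

LMS_URL = "http://192.168.1.10:9000/jsonrpc.js"   # assumed LMS host and port
PLAYER_ID = "00:04:20:12:34:56"                   # assumed player MAC address

def lms_request(player_id, command):
    # Send one command to LMS via its JSON-RPC endpoint and return the reply.
    payload = json.dumps({"id": 1, "method": "slim.request",
                          "params": [player_id, command]}).encode("utf-8")
    req = urllib.request.Request(LMS_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def on_next_intent():
    # AMAZON.NextIntent handler: skip to the next track on the selected player.
    lms_request(PLAYER_ID, ["playlist", "index", "+1"])
    return {"version": "1.0", "response": {"shouldEndSession": True}}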

Plus, this version has experimental support for streaming LMS library content directly through the Echo itself. Right now it kind of works, but more work and a lot of testing are needed to determine how stable it is and whether it's viable for inclusion in the release version.
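The natural way to do this in a custom skill is to answer with an AudioPlayer.Play directive pointing the Echo at a track URL served from the LMS library. The sketch below is illustrative only: the host name, URL scheme and token are assumptions, and Alexa requires the stream URL to be HTTPS, which is part of what makes this fiddly.

def play_track_response(track_id):
    # Assumed URL scheme for fetching a library track from LMS; in practice
    # this needs to be reachable from Amazon's side and served over HTTPS.
    stream_url = "https://lms.example.com/music/%s/download" % track_id
    return {
        "version": "1.0",
        "response": {
            "shouldEndSession": True,
            "directives": [{
                "type": "AudioPlayer.Play",
                "playBehavior": "REPLACE_ALL",
                "audioItem": {
                    "stream": {
                        "url": stream_url,
                        "token": str(track_id),
                        "offsetInMilliseconds": 0,
                    }
                },
            }],
        },
    }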

Some usage stats

I've been collecting statistical data on tester interactions throughout the development cycle, mostly to get an idea of round-trip times for spoken commands. Here's some of the data visualised covering the period from v0.1.0.4 to now (v0.2.0.5).

In total, there are around 6,500 records. I've removed anything above 10 seconds, as those represent failed attempts that could be due to any number of reasons. I've also removed any very short attempts, as they are likely unprocessed requests from users who are not account linked.
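For illustration, the filtering amounts to something like the snippet below, assuming each record carries a round-trip time in seconds; the field name and cut-off values here are mine, not the project's.

def filter_records(records, min_rtt=0.5, max_rtt=10.0):
    # Drop failed attempts (over 10 s) and unprocessed requests (very short).
    return [r for r in records if min_rtt <= r["rtt"] <= max_rtt]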


Perl dependency error installing SqueezeServer

One of the issues arising in testing is that some plugin code is failing on perl v5.24.

My main development is on an older iMac and testing is in Ubuntu VMs, all of which seem to max out at v5.22. I downloaded Ubuntu Server 17 and did a quick install under VMware to get a system with perl 5.24, but squeezeserver 7.9.0 installation failed with an error along the lines of: squeezeboxserver depends on perl (>= 5.8.8).

Thankfully, I found a solution and I'm posting it here for future reference as I'm sure it will come up again:

Edit /etc/apt/sources.list and add the line:

deb http://http.us.debian.org/ squeeze main contrib

then refresh the package index and proceed with:

# apt-get update
# apt-get install -f

Alpha Progress

We're now on test round 11 with Alpha v0.2.0.3. The purpose of the Alpha stage is to expand the available commands to the final set and test as we go. So far, the following commands have been added:

STATUS
We previously had a 'List Devices' command which would return an enumerated list of devices connected to the LMS with their connected status. A tester suggested that we could extend this list to include volume and other parameters. Rather than do that, I decided to keep the list devices command a basic list and add a new 'Status' command. Now, saying 'Alexa, tell Squeeze Box to give me a status update (for player X)' will return a detailed list for the nominated device or all devices, indicating number, name, connected status, playing status, volume and whether muting is enabled or not. This will later be expanded to include details of the track/stream playing.
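Under the hood, a report like that can be assembled from standard LMS queries. The sketch below reuses the lms_request helper from the earlier example together with the serverstatus, mode and mixer queries; the wording of the spoken reply is illustrative.

def build_status_report():
    # Enumerate players, then query each one for its playing state and volume.
    server = lms_request("", ["serverstatus", "0", "99"])["result"]
    lines = []
    for i, player in enumerate(server.get("players_loop", []), start=1):
        pid = player["playerid"]
        mode = lms_request(pid, ["mode", "?"])["result"].get("_mode", "stop")
        volume = lms_request(pid, ["mixer", "volume", "?"])["result"].get("_volume", 0)
        muting = lms_request(pid, ["mixer", "muting", "?"])["result"].get("_muting", 0)
        lines.append("Player %d, %s, is %s, %s, volume %d percent, muting %s." % (
            i, player["name"],
            "connected" if player.get("connected") else "disconnected",
            mode, abs(int(volume)),
            "enabled" if int(muting) else "disabled"))
    return " ".join(lines)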

POWER
The skill now includes commands to control the power state of individual devices or all devices, switching them on or off as required.
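In LMS terms this is just the per-player power command, with 'all devices' being a loop over whatever serverstatus reports. A minimal sketch, again reusing the lms_request helper from above:

def set_power(player_id, on):
    # "power 1" switches a player on, "power 0" switches it off.
    lms_request(player_id, ["power", "1" if on else "0"])

def set_power_all(on):
    # Apply the same command to every player the server knows about.
    server = lms_request("", ["serverstatus", "0", "99"])["result"]
    for player in server.get("players_loop", []):
        set_power(player["playerid"], on)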

EXTENDED LISTENING
One of the frustrating voice UX aspects of Echo interaction, particularly for this skill, is having to ask 'Alexa, tell squeeze box to...' for every command. Alexa does support sessions, allowing the skill to remain open and listen for additional commands that don't need to be prefixed with the wake word or skill invocation. Initial testing revealed a number of issues.

The main problems were sounds from three principal sources being picked up and generating phantom commands. These sources were:

  • Alexa herself reacting to the tail end of her own prompts (!!)
  • Music from the user's squeezebox
  • Extraneous and environmental sounds

The first was dealt with by introducing a very short pause after Alexa speaks and before she listens. The second is addressed by lowering the volume of squeezeboxes while she's listening. The third is addressed by allowing users to turn off extended listening if environmental noise is an issue.

So, it now works like this:

  • 'Extended Listening' is enabled by default
  • Users can set it on or off by saying 'Alexa, tell squeezebox to turn extended listening on/off'
  • Users can check the status by asking 'Alexa, ask squeeze box to give me my settings'
  • If extended listening is Off, the skill will only continue listening after the user says 'Alexa, Open Squeeze Box'
  • If it's on, she will continue listening unless the command is one that plays music or shouldn't stop it (play, resume, set volume)
  • In cases where extended listening is active, volume on all squeezeboxes is reduced to 10%
  • Volume is restored to previous levels when she stops listening
  • There's a failsafe of 60 seconds, after which the volumes are restored automagically


This feature is the focus of the v0.2.0.x testing round and we're making slow but steady progress on refining it. The inspiration here is how the inbuilt music playback skills work. For example, if Alexa is playing music from Spotify and the user speaks the wake word, the volume is automatically reduced for the duration of the listening session. Unfortunately, a custom skill does not get notification of the wake word and only knows the user is speaking once the command has been issued. Therefore, we cannot be as slick as the inbuilt skills, but we can get close.
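To make the flow above concrete, here's a much-simplified sketch of how that session and ducking logic might hang together. The intent names, the DUCK_LEVEL constant and the in-memory volume store are illustrative, and the real skill's 60-second failsafe isn't shown; it would need to live somewhere more persistent than a single request handler.

DUCK_LEVEL = 10                      # percent, as described above
NO_FOLLOW_UP = {"PlayIntent", "ResumeIntent", "SetVolumeIntent"}   # assumed intent names
saved_volumes = {}                   # player id -> volume before ducking

def duck_all_players():
    # Remember each player's volume, then drop it so Alexa can hear the user.
    server = lms_request("", ["serverstatus", "0", "99"])["result"]
    for player in server.get("players_loop", []):
        pid = player["playerid"]
        vol = lms_request(pid, ["mixer", "volume", "?"])["result"].get("_volume", 0)
        saved_volumes[pid] = vol
        lms_request(pid, ["mixer", "volume", str(DUCK_LEVEL)])

def restore_all_players():
    # Put every ducked player back to the volume it had before listening began.
    for pid, vol in saved_volumes.items():
        lms_request(pid, ["mixer", "volume", str(vol)])
    saved_volumes.clear()

def respond(speech, intent_name, extended_listening):
    # Keep the session open (and duck volumes) unless extended listening is
    # off or the command is one that plays / shouldn't interrupt music.
    keep_open = extended_listening and intent_name not in NO_FOLLOW_UP
    if keep_open:
        duck_all_players()
    else:
        restore_all_players()
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": speech},
                         "shouldEndSession": not keep_open}}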

Here's a demo of some of these features:



We've also welcomed a number of new testers on board. It's great to have an infusion of new blood and they're already finding previously unknown issues due to specific configurations etc.

Moving to alpha

v0.1.6 is currently in pre-testing and, all going well, will be released to all testers over the weekend.

This is a significant step up, as the plugin now includes both an HTTP proxy and MQTT as options for communication between the skill and the LMS. The MQTT implementation was the primary focus of attention and testing in v0.1.5.x and it seems to work well. Response times are better than with the HTTP proxy, and further improvements in v0.1.6 and future versions will only help with stability and effectiveness.
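For flavour, the MQTT option boils down to the skill side publishing a command that the plugin, subscribed to the same topic, then executes against LMS. The broker address and topic layout below are invented for the example, and paho-mqtt is simply one convenient client library, not necessarily what either side actually uses.

import json
import paho.mqtt.publish as publish

BROKER = "mqtt.example.com"           # assumed broker host
COMMAND_TOPIC = "habtunes/command"    # assumed topic layout

def publish_command(player_id, command):
    # Fire-and-forget publish of one player command for the plugin to pick up.
    publish.single(COMMAND_TOPIC,
                   payload=json.dumps({"player": player_id, "cmd": command}),
                   qos=1, hostname=BROKER)

publish_command("00:04:20:12:34:56", ["playlist", "index", "+1"])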

I've also set up a repository for the plugin, meaning installation and updating is less command line and more GUI. With this in place, the plugin is more or less feature complete and future work will be focused on bug fixes and optimisations.

That being the case, I'm pleased to announce that this release marks the commencement of the Alpha test phase of the project. Still a long way to go and still the potential to completely bork your LMS, but a milestone nevertheless.

On the skill side, I've added support for volume control to the existing play/pause commands. The focus of the Alpha phase will be the expansion of the command set to cover a well-rounded subset of LMS / device functionality. (@nickb has kindly agreed to the proposal that this skill and his excellent DIY Secure Integration would benefit from sharing a common core command set, allowing users to try out both and feel familiar with each.) That set of commands (with some extensions) will be the focus of development over the next few releases. Once they are implemented, we'll move on to Beta.
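For the volume commands, the handler is essentially a slot value passed straight through to the LMS mixer command. The slot and intent names below are illustrative rather than the skill's actual interaction model, and the lms_request helper from the first sketch is reused.

def on_set_volume_intent(intent, player_id):
    # Read the spoken level from an assumed "Volume" slot and clamp it to
    # the 0-100 range LMS expects before handing it to the mixer command.
    level = int(intent["slots"]["Volume"]["value"])
    level = max(0, min(100, level))
    lms_request(player_id, ["mixer", "volume", str(level)])
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText",
                                          "text": "Volume set to %d percent." % level},
                         "shouldEndSession": True}}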

With this release, I'm also widening the tester base. There have been several expressions of interest and offers of help over the past few weeks and I'll be contacting those users this weekend and inviting them into the test portal.

A full list of changes and improvements over the past few versions is included in the complete post.



About Hab-Tunes

Hab-Tunes is an in-development skill for the Amazon Echo that allows voice control of Squeezebox devices. This site is intended to document the skill development and help give some idea of progress.

Follow Updates

Search #habtunes on twitter

Want to Help?

If you can contribute time, skills or ideas, find out how you can get involved.
