Sunday, February 24, 2019


Bug Reporting

DEBUG LOGGING

From time to time, as part of testing, you might be asked to submit LMS logs. These logs help track down particular problems when a basic description does not suffice.

The LMS and, in turn, the Alexa plugin, support several levels of logging. Normally, you want these set to a low level to ensure that the (quite) verbose debug output does not clog up your log files. If requested to submit detailed logs, here's how to turn them on:

 

1. Set up extended logging in the plugin

In the LMS settings interface, access the Alexa plugin settings page and scroll to the bottom. Here you will see the 'Extended Debugging' checkbox. Ensure it's enabled and click 'Apply' to save the setting.


2. Configure LMS Logging

Next, still in LMS settings, click on the 'Advanced' tab and choose 'Logging' from the drop-down menu.

First, ensure that the 'Save logging settings' option is checked at the top of the 'Advanced Log Settings' section;

 

Then, scroll down the page to find the '(plugin alexa) - Alexa' line item. Here, in the drop-down, change the logging level to 'Debug' as follows;

 

Once everything is set, hit 'Apply' to save.

 

3. Finding your Log File

Ultimately, you will need to send your log file for analysis. It's best to have a 'clean' file. That is, one that contains a fresh LMS session and includes logs of the issue at hand, but nothing else. To achieve this, you will need to:

a) Clear Your Log File

b) Restart LMS

c) Conduct the test

d) Close LMS

e) Submit the log file
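Step (a) can be scripted if you prefer. The sketch below simply empties the log file in place; the path shown is an assumption (a typical Linux default), so substitute the path displayed at the top of your own LMS Settings->Advanced->Logging page.

```python
from pathlib import Path

# Assumed location (typical Linux default) -- use the path shown at the top
# of LMS Settings->Advanced->Logging on your own system.
LOG_PATH = Path("/var/log/squeezeboxserver/server.log")

def clear_log(path: Path) -> None:
    """Empty the LMS log file in place so the next session starts clean."""
    if path.exists():
        path.write_text("")  # truncate without deleting the file

# clear_log(LOG_PATH)  # run this with LMS stopped, then restart LMS
```

Run it with LMS stopped, then restart LMS and conduct your test as described above.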

 

Your log file will be in a different location depending on your LMS host platform. There are a few ways of finding it. On all platforms, you can open LMS Settings->Advanced->Logging. At the top of this page, you will see the path to your log file and can even open it in your browser;

 

 

If your LMS is running on Windows, you can access the LMS Control Panel, click on the 'Advanced' tab and find a link to the server log files as highlighted here;

 

 

 

In either case, you should shut down your LMS, open the log file in a text editor, clear all contents and save the file. You are then ready to conduct the test.

 

4. Conclusion

It's worth restating: it's best to submit a clean log file that includes LMS startup and the error only. Therefore, before each test, please clean the log file and launch LMS. Save each log individually for submission.

Once your testing is done, revert the logging changes to ensure your log file does not fill unnecessarily.

 

Supported Commands


There are broadly four types of commands utilised within the skill:

Device Selection & Management
Commands allowing for the query and selection of SqueezeBox devices connected to the LMS

Transport Control
Commands related to playback such as pause, skip etc.

Content Cueing
Commands to find and select Tracks, Favourites and Playlists

Skill Configuration & Information
Commands to tailor how the skill behaves

 

-- This is a draft document. Pre-launch, all available commands will be fully documented here. --

 

 

USING COMMANDS

Overview

In order to use the speech commands, the Alexa Squeeze Box skill must be opened. Like any skill, there are two ways to approach this.

In the first instance, users may simply open the skill as follows;

USER: 'Alexa, Open Squeeze Box'

The skill will respond if successfully opened and wait for a command, which may then be spoken without the need to call 'Alexa'.

Alternatively, the user may use the 'one-shot' approach by issuing a command while launching the skill;

USER: 'Alexa, tell Squeeze Box to List Devices'

In this case, the skill will launch and enumerate the Squeeze Box devices attached to the LMS.

 

Launch Caveats

While this second method is convenient, it's not fully supported. Some commands appear to be reserved for Alexa and don't work reliably as part of the one-shot approach. For example, the following won't always work;

USER: 'Alexa, tell Squeeze Box to play playlist Weekend Tunes'

Often, Alexa will ignore the skill name and attempt to play content from a default service, such as Spotify. ('Play' seems to be a reserved keyword). Possible workarounds are;

1. Open the skill, then issue the command;

USER: 'Alexa, open Squeeze Box'

ALEXA: 'What would you like to do?'

USER: 'Play playlist Weekend Tunes'

 

2. Use an alternative form of the command that avoids keywords;

USER: 'Alexa, tell Squeeze Box to play playlist Weekend Tunes'

 

Command Syntax

Efforts have been made to implement natural language for interacting with the skill. However, it's a complex skill and people often have very different ways of interacting, or even naming things. (is it a song, track, cut, piece, or something else?).

Users will find that there are several ways of saying things supported in the skill. If something is not working quite the way you think it should, let us know on the forums and we'll try to address it with an update.

 

Multiple Commands (in skill)

In cases where a command causes music to commence playback, the skill will always close, and must be launched again in order to issue further commands. (there are some exceptions, see below)

However, in cases where the command does not cause music to play, the skill can be configured to remain listening for further input. This is convenient, for example, where the user wishes to first obtain, say, a list of players, then select one and then set the volume on that device.

The concept of 'extended listening' is built into the skill. It is 'on' by default but may be configured by voice to user preference. When extended listening is 'On', the skill will remain open and prompt for another command, when appropriate.

Re-prompts are also supported and configurable. In conjunction with extended listening, re-prompts, when enabled, keep the skill listening for an even longer period by re-prompting the user.

The following is a possible interaction when the extended listening and re-prompt configuration settings are both set to on:

USER: 'Alexa, Open Squeeze Box'

ALEXA: 'What would you like to do?'

USER: 'List Devices'

ALEXA: 'There are two devices. Device One, Living Room Radio, is connected. Device Two, Kitchen Touch, is connected. It is the selected device. What next?'

USER: 'Select Living Room Radio'

ALEXA: 'Living Room Radio is the selected device. What next?'

<waits>

ALEXA: 'Is there anything else I can help you with?'

USER: 'Set the volume to 50 percent'

ALEXA: 'OK, What next?'

<waits>

ALEXA: 'What would you like to do?'

<waits>

<skill closes due to lack of response>
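The behaviour above can be summarised as a small decision rule. This is an illustrative sketch, not the skill's actual code: commands that start playback close the skill, while anything else keeps the session open when extended listening is on.

```python
def session_stays_open(starts_playback: bool, extended_listening: bool) -> bool:
    """Return True if the skill should keep listening after a command."""
    if starts_playback:
        return False           # e.g. 'Play playlist Weekend Tunes' closes the skill
    return extended_listening  # e.g. 'List Devices' keeps the session open

# 'Select Living Room Radio' with extended listening on -> skill asks 'What next?'
assert session_stays_open(False, True)
# Playback commands always close the skill, whatever the setting
assert not session_stays_open(True, True)
```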

 

Multiple Commands (Audio Player)

As well as supporting multiple commands inside the skill, we leverage the audio player architecture to allow users to issue certain commands after the skill closes.

Normally, a skill needs to be 'open' to accept commands. However, audio player skills allow users to say things like;

USER: 'Alexa, Pause'

Such commands are sent to the most recently used skill by default. Therefore, if the user has interacted with the Squeeze Box skill, and nothing since, they can issue transport commands such as 'play', 'pause', 'next', 'previous' without launching the skill at all. Handy!

 

Volume Diminishing

One of the problems with building a voice control for music playback devices is that those devices are inherently noisy and can interfere with reception of spoken commands.

Echo devices are pretty good at filtering out background noise, and having the pick-up device physically separate from the playback device can help as well, if users are careful about placement.

However, when your SqueezeBox Boom is pumping out tunes with the volume set to eleven (the skill can do it, be sure to try!), it can be difficult for Alexa to understand the user.

To mitigate this, the skill will diminish the volume on the selected device while the skill is open. Users do need to bear in mind, though, that a discrete Echo can control a Squeeze Box located in a different area. The skill will diminish the volume on the selected device, not necessarily the one located in closest proximity.

How It Works

After you complete the setup, using Alexa to control your squeezebox is as easy as giving her simple voice commands. Today, commands achieve simple tasks but they will become more sophisticated as the project matures.

 

The quick version

You say "Alexa, tell Squeeze Box to play"

Alexa passes this command to the plug-in that runs on your Logitech Media Server (LMS)

LMS starts playing the music you have queued up on your Squeezebox

 

The longer version

This project has several components:

  • Alexa Skill
  • Authentication
  • Skill > plugin communication
  • LMS plugin

 

1. Alexa Skill

Alexa is a voice-driven 'digital assistant'. Your Alexa hardware (echo, dot, tap) allows you to access services over an internet connection. Software developed for Alexa is called a Skill. Skills don't run on your hardware – all the action happens in the cloud. The hardware simply transmits your voice to the cloud where it is interpreted by Alexa who actions a Skill.

Therefore, you need to say "Alexa, tell Squeeze Box to play". If you just say "Alexa, play", she will play her default music channel or the last podcast you were listening to. Telling her ‘Squeeze Box’ means she directs your command to that Skill.

(though recent advances in the skill mean that now, once you have initialised the Squeeze Box skill, you can issue some commands directly, eg 'Alexa, pause', provided that 'Squeeze Box' remains your last used skill).

 

2. Authentication

It’s important that Alexa only has access to your squeezebox! When you enable the Skill for the first time, you will be asked for a username and password. These are your hab-tunes credentials. Each time you use the skill, Alexa validates you and your command against the hab-tunes database of known users and commands.

 

3. Skill > plugin communication

The skill sends commands to your LMS using the MQTT protocol. This data is posted as a ‘topic’ on an MQTT broker server. More on the basics of MQTT here.

Topics are unique to a user (for example /habtunes/JohnJones) and, again, only accessible after verification of hab-tunes credentials.

Note: MQTT doesn’t send anything directly to your network. If it did, you would need to open your network to the internet which is a bad idea for many reasons. The plugin also supports such direct communication but this is available as a fallback and is not the recommended configuration. 
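As an illustration of the subscriber side, the sketch below uses the third-party paho-mqtt package. The broker address and credentials are placeholders, and this is not the plugin's actual code; only the per-user topic shape (/habtunes/JohnJones) comes from the description above.

```python
def user_topic(username: str) -> str:
    """Per-user topic of the shape described above, e.g. /habtunes/JohnJones."""
    return f"/habtunes/{username}"

def listen(username: str, password: str, broker: str = "broker.example.com") -> None:
    """Subscribe to the user's topic and print incoming commands (illustrative only)."""
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

    def on_message(client, userdata, msg):
        print(f"command on {msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.username_pw_set(username, password)  # stand-in for hab-tunes credentials
    client.on_message = on_message
    client.connect(broker, 1883)
    client.subscribe(user_topic(username))
    client.loop_forever()  # block and wait for published commands

# listen("JohnJones", "secret")  # placeholder credentials
```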

 

4. LMS plugin

The Alexa plug-in runs on your LMS. It listens to (or is 'subscribed' to) your MQTT topic (/habtunes/JohnJones). 

The plug-in pulls any changes to the topic and passes new commands along to LMS. The data posted in the topic is directly executable by LMS, so no further translation is needed, though the data is parsed and validated for security purposes.

In current testing, commands take around 1 second from voice command to LMS execution (AWS is based in Ireland for this project, with measurements taken from the west coast of the US).

Once LMS has executed the command, it publishes data back to the skill over https (http if SSL is not available on LMS host platform). The return data is picked up by the skill and sent back to Alexa in the form of a speech/card response.
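To give a feel for the kind of request involved, LMS exposes a JSON-RPC endpoint at /jsonrpc.js that accepts CLI-style commands. The sketch below is not the plugin's actual code; the host, port and player MAC are placeholders.

```python
import json
import urllib.request

def build_payload(player_mac: str, command: list) -> bytes:
    """JSON-RPC body in the shape LMS expects at /jsonrpc.js."""
    return json.dumps({
        "id": 1,
        "method": "slim.request",
        "params": [player_mac, command],
    }).encode()

def lms_request(player_mac: str, command: list,
                host: str = "lms.local", port: int = 9000) -> dict:
    """POST a CLI-style command (e.g. ['pause']) to LMS and return its reply."""
    req = urllib.request.Request(
        f"http://{host}:{port}/jsonrpc.js",
        data=build_payload(player_mac, command),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (placeholder player MAC):
# lms_request("00:04:20:12:34:56", ["pause"])
```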

 

 

Get Involved

The Alexa Squeeze Box Skill is currently in Alpha testing. This means that new features are being added and undergoing rigorous testing by a team of interested individuals from around the world. These are Squeeze Box users who expressed an interest from an early stage and have suffered through some very rough versions.

We're not adding any further users as alpha testers, as there's a significant amount of knowledge, configuration and shorthand under the bridge. Adding new alpha testers would require more explaining than it's worth.

Once the skill is feature complete, we will move to an open beta. For this, we will leverage Amazon's skill beta process. 'Feature complete' means that all the functionality that will exist in the first release is present and has been tested as part of the alpha programme. For the beta cycle, we will invite more users interested in the skill to test it on a variety of platforms and setups, hopefully finding and fixing any remaining bugs.

To get involved in the Beta, send a message with a little background (your Squeezebox setup, software experience), and include the e-mail address you use with your Alexa devices / Amazon account (we need this to set you up on the Beta programme).

There are no real criteria for participation apart from an interest and a willingness to spend some time reporting bugs.

If you have previously made contact on slimdevices forums or by direct e-mail, there is no need to do so again.

To participate in the beta, you will also need to set up an account here at Hab-Tunes.com. However, there's no need to do this until the beta starts, and certainly no need to take out one of the paid plans - they are there for testing only!

The beta cycle will likely start mid-late January 2019.

Philosophy

The idea of the skill is to make it easy to use an LMS-based squeezebox infrastructure hands free. Whether this is a single Squeezebox Radio or a whole-house installation comprising multiple different devices, as long as you are running a single LMS, you can utilise voice control.

The core philosophy is that users configure their setup in LMS, and use the skill for control.

This approach helps explain how decisions were made about what functionality to include (and exclude). For example, the skill allows users to enumerate, select and play favourites and playlists. However, these cannot be configured by voice. You would use the LMS to set these up.

In this way, the skill is focused on day to day activities, and strives to make control easier, or at least provide an alternative approach.

Another key tenet is the ‘always free’ approach. Hab-Tunes is a personal project: one person working in their free time on a hobby project. Nothing more, nothing less. It grew out of a personal desire to link Alexa to Squeezebox and kind of grew legs when discussed on the slimdevices forums.

As such, it’s always been the intention to make the skill freely available. Unfortunately, there are costs. The heavy lifting is done on Amazon Web Services (AWS) and that incurs monthly fees. There’s also annual costs around domain registrations, SSL certs and such like. Not to mention dedicated echoes and squeezeboxes tied up in development / testing.

To balance the ‘available for free’ with the ‘must cover costs’ imperatives, there’s a simple model - users can use the skill to make a set number of LMS calls per month. If more are required, a small payment can be made to unlock unlimited usage. In this way, users can try for free and use the skill in a limited way for no charge, but those who use it a lot and find it useful are asked to make a small contribution.

For more information, see 'How It Works'.
