While v0.1.4 didn't see much change in terms of new functionality, it marked a major shift towards the final skill infrastructure: the main skill logic migrated to an Amazon EC2 instance. The update went very smoothly; despite a little hiccup caused by my poor planning of the transition, hardly any issues arose among the 13 testers who adopted this version.
As part of the rollout, the current limited command set was optimised, particularly the play/pause commands, which benefitted from fewer interactions between the skill and the LMS plug-in, resulting in roughly a 33% improvement in overall skill response times for these commands.
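To give a flavour of the kind of round-trip saving involved, here's a minimal sketch. LMS's standard JSON interface at `/jsonrpc.js` (with the `slim.request` method) is real, and the CLI's bare `pause` command toggles play/pause by itself; the player MAC address is just a placeholder, and this isn't necessarily the exact change made in the skill.

```javascript
// Sketch: a play/pause toggle as a single LMS JSON-RPC request, rather
// than a status query followed by a separate play or pause command.
// The /jsonrpc.js endpoint and slim.request method are part of the
// standard LMS JSON interface; the player MAC below is a placeholder.
function buildLmsRequest(playerId, command) {
  return {
    id: 1,
    method: 'slim.request',
    params: [playerId, command],
  };
}

// In the LMS CLI, "pause" with no argument toggles play/pause, so one
// request suffices -- no need to first ask for the current play mode.
const body = buildLmsRequest('00:04:20:ab:cd:ef', ['pause']);

// The skill would POST this body to http://<lms-host>:9000/jsonrpc.js
console.log(JSON.stringify(body));
```

Halving the number of skill-to-LMS exchanges like this matters most for distant users, since each exchange pays the full round-trip latency.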
However, there's still a significant geographic discrepancy: users in western Europe enjoy measured skill response times of under 1s, whereas those in the US average around 4s, and Australian users often endure 6s or more while awaiting responses. This is explained, of course, by the selection of western Europe as the EC2 host region (as the skill developer, I can enjoy the luxury of choosing a location close to home!). It's something I'll need to address in the future, though.
There are a few approaches to this. In the first instance, I could gauge the location of users and move the skill logic to an EC2 instance in a region closer to the majority of them. The other option would be to duplicate the skill logic across two or more regions; indeed, Amazon allows for this in the skill set-up, where separate endpoints can be set for EU and US users. Long term, however, there will be a cost penalty, and we'll have to wait and see whether any revenue that might be generated can cover the fees for two EC2 instances.
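For anyone curious, the per-region endpoints live in the Alexa skill manifest (`skill.json`). Roughly, the relevant fragment looks like the below: the default endpoint serves everyone, and a `regions` entry overrides it for North American (`NA`) users. The URIs here are placeholders, not the skill's real endpoints.

```json
{
  "manifest": {
    "apis": {
      "custom": {
        "endpoint": {
          "uri": "https://eu-instance.example.com/alexa"
        },
        "regions": {
          "NA": {
            "endpoint": {
              "uri": "https://us-instance.example.com/alexa"
            }
          }
        }
      }
    }
  }
}
```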
In researching this aspect, I came across this really useful comparison of transfer speeds between Amazon regions. I think if I were going for a second EC2 instance, California would provide the best average speeds for both the US and Australia.
v0.1.5 is now underway, and the main focus is implementing a test workaround for the requirement to have an open port on the client side. My initial plan was to use MQTT, but this would have required considerable setup on both the server and client sides. Discussions on forums.slimdevices.com led me to consider WebSockets instead, and it turns out these are trivial to set up in node-red, so that end is already done. On the LMS plug-in side, there's a lot of additional work required (extra libraries, etc.), but progress is steady there as well.
The plan is to first replace the initial HTTP 'command ready' transactions from the skill. This will remove the need for the open port and the HTTP call from skill to LMS that kicks off each command. Once that's shown to work in v0.1.5 testing, I'll look at integrating WebSockets further so that the entire transaction is conducted across them, which should also be faster overall.
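The idea can be sketched in a few lines, with the WebSocket transport itself abstracted away: because the client side keeps a connection open to the EC2 instance, the skill can simply push a command down it, and no inbound port is needed. The message shape (`type`, `player`, `command`) is entirely my own invention for illustration, not the plug-in's actual protocol.

```javascript
// Sketch of the v0.1.5 flow: the skill logic on EC2 pushes commands to
// the LMS side over an already-open WebSocket, so the client needs no
// open inbound port. The {type, player, command} message shape is a
// hypothetical one, invented purely for illustration.
function handleSkillMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type !== 'command') {
    return null; // ignore anything that isn't a pushed command
  }
  // Map the pushed message to the LMS CLI line the plug-in would run,
  // e.g. "00:04:20:ab:cd:ef pause"
  return `${msg.player} ${msg.command.join(' ')}`;
}

// Example: a play/pause toggle pushed from the skill
const cliLine = handleSkillMessage(
  JSON.stringify({ type: 'command', player: '00:04:20:ab:cd:ef', command: ['pause'] })
);
console.log(cliLine); // "00:04:20:ab:cd:ef pause"
```

Once both legs of the transaction run over the socket, each command costs one push instead of an HTTP handshake, which is where the expected speed-up comes from.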