We’re moving Spinitron to a new cluster of servers, each with 8 cores and 128 GiB of memory. This is necessary to cope with the growth of database tables and search engine indexes. The IP addresses of our servers will change.
In the past, changing server IP addresses caused some disruption to service no matter what. This time the disruption can be avoided if you take certain steps before we shut down the old servers.
Automation systems
If you are using ENCO, iMediaTouch, WideOrbit or Skylla, then you need to update the IP address it uses to send now-playing updates to Spinitron. At present it is probably configured to send to 142.44.138.121.
Change your automation to send to 54.39.125.196 (asap)
Your protocol and port number stay the same – only the IP address changes. Updated instructions for configuring your automation system are in Spinitron under Admin: Automation & API: Automation control panel, including the protocol, IP address and port number to use.
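If your system pushes over TCP and you want to sanity-check reachability before the cutover, a quick script like the one below can confirm that the new address accepts connections on your configured port. This is just a sketch, not an official test tool – the port shown is a placeholder, so substitute the one from your Automation control panel, and note it doesn’t apply to UDP-based setups.

```python
# Sketch: check that the new Spinitron address accepts TCP connections on the
# port shown in your Automation control panel (5000 here is only a placeholder).
import socket

NEW_ADDRESS = "54.39.125.196"
PORT = 5000  # placeholder – use the port from Admin: Automation & API

try:
    with socket.create_connection((NEW_ADDRESS, PORT), timeout=5):
        print(f"TCP connection to {NEW_ADDRESS}:{PORT} succeeded")
except OSError as err:
    print(f"Could not connect to {NEW_ADDRESS}:{PORT}: {err}")
```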
Metadata push
Many of you have metadata push channels sending to devices (RDS encoders, for example) behind firewalls. The firewall is usually configured with a pinhole or port forwarding rule that accepts data only from one of Spinitron’s IP addresses, 142.44.138.121 or 142.44.138.122.
Change your firewall/router to ALSO accept traffic from 54.39.125.196 (asap)
Protocols and port numbers are unchanged – only the source IP address changes.
When everyone has transitioned and we cut metadata push over to send from the new address, we’ll notify you again so you can close off the two old addresses.
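To be clear about what the firewall change amounts to: during the transition you simply widen the source allow-list rather than replace it. The snippet below is only an illustration of that idea for a receiving application – it is not a firewall configuration, and the listening port is a placeholder.

```python
# Illustration of the allow-list idea: during the transition, accept pushes
# from the old source addresses AND the new one.
import socket

ALLOWED_SOURCES = {"142.44.138.121", "142.44.138.122", "54.39.125.196"}
LISTEN_PORT = 10001  # placeholder – use your channel's destination port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LISTEN_PORT))

while True:
    data, (src_ip, src_port) = sock.recvfrom(4096)
    if src_ip in ALLOWED_SOURCES:
        print(f"Accepted push from {src_ip}: {data!r}")
    else:
        print(f"Dropped packet from unexpected source {src_ip}")
```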
We will begin using 54.39.125.196 on Feb 20 2020, gradually ramping up until Mar 25, at which point the old addresses will be phased out.
Most automation systems, including yours, use the host name spinitron.com, which isn’t changing. But ENCO, iMediaTouch and WideOrbit don’t accept a host name – they only accept an IP address – and that configured address is what needs to change.
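As an illustrative aside (not part of any required configuration), you can see what spinitron.com currently resolves to with a one-liner – which is why host-name users don’t have to change anything when the underlying addresses move:

```python
# Illustrative only: print the address that spinitron.com currently resolves to.
import socket
print(socket.gethostbyname("spinitron.com"))
```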
And thanks for asking, @WNMC_Eric. Your question showed me how to make the notice clearer.
Today we started a gradual transition of the push service from using the old source addresses to the new one on tcp:// and udp:// push channels.
From now until Mar 25 2020 the push service will choose at random to send from the new source address 54.39.125.196 or from one of the old source addresses.
The probability of using the new address starts at about 1% now and will increase by roughly 3 percentage points per day until Mar 25, at which point it reaches 100%.
This applies to tcp:// and udp:// push channels only.
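To make the schedule concrete, here is a rough back-of-the-envelope sketch of the ramp. The exact internal schedule may differ; this just reflects the “about 1% now, roughly 3 percentage points per day” description above.

```python
# Rough sketch of the ramp: ~1% on Feb 20 2020, rising about 3 percentage
# points per day, reaching 100% by Mar 25 2020.
from datetime import date

START = date(2020, 2, 20)
END = date(2020, 3, 25)

def new_address_probability(day: date) -> float:
    if day <= START:
        return 0.01
    if day >= END:
        return 1.0
    return min(1.0, 0.01 + 0.03 * (day - START).days)

for d in (date(2020, 2, 20), date(2020, 3, 1), date(2020, 3, 15), date(2020, 3, 25)):
    print(d, f"{new_address_probability(d):.0%}")
```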
If you have already opened your network equipment (e.g. routers, firewalls) to accept TCP and/or UDP push messages from 54.39.125.196 then you’re all set – you don’t need to make any more changes at present.
If your network equipment is configured to accept TCP and/or UDP push messages only from the old addresses 142.44.138.121 and 142.44.138.122, then some of your messages will be lost. The loss will be random and the rate of loss will increase steadily from now until Mar 25, at which point all messages will be lost. Set your network equipment to also accept messages from the new address and 100% of messages will get through.
If you have any questions or need any help, please contact me here or by email or phone.
Is the transition complete? My university IT department removed the firewall pin holes from the old 142.xxx addresses this past week. I’m only getting about every third song pushed to my RDS encoder from the new 54.xxx address.
Yes, the transition was completed a while back. We’re only sending on 54.39.125.196 afaik.
So we should look for other explanations. I see you have 3 channels all of which seem to be working as far as the metadata push logs can reveal. Is there any problem with the other two? I mean, do the same messages get lost on all channels or is only RDS affected?
If you want, I can put a packet monitor on the destination address and port of the RDS channel and then we can cross check that with the metadata push log. Let me know.
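For anyone curious what that kind of cross-check looks like, the sketch below is roughly the idea (it is not Spinitron’s actual tooling): capture every UDP datagram leaving for the RDS destination and compare the timestamps against the metadata push log. It uses scapy, needs capture privileges, and the address and port are placeholders.

```python
# Sketch of a packet-capture cross-check (not Spinitron's actual tooling):
# log every UDP datagram headed for the RDS destination with a timestamp.
from datetime import datetime
from scapy.all import sniff

RDS_HOST = "192.0.2.10"   # placeholder destination address
RDS_PORT = 10001          # placeholder destination port

def log_packet(pkt):
    print(f"{datetime.now().isoformat()} {pkt.summary()}")

sniff(filter=f"udp and dst host {RDS_HOST} and dst port {RDS_PORT}", prn=log_packet)
```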
Hi Tom, I’m experiencing the same issue with my metadata push channels. I am experiencing intermittent failures to my RDS, Cirrus and Icecast channels… when the failure occurs, it happens on all three channels simultaneously. I am also having our university IT department look into the issue as well.
Just a quick followup on observations on my end. When spin metadata fails, it seems to fail on all three of the platforms it is sent to on the campus side (RDS, Icecast, Cirrus). I do see the “failed” spin appear on the Spinitron and TuneIn platforms, however.
udp://HOST:PORT
<song title="%sn% BY %an%" url="kcpr.org"></song>
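For context on what that channel does on the wire, a push like this is essentially fire-and-forget. The sketch below (placeholder host and port, with an illustrative substitution of %sn% and %an%) shows roughly what sending one update amounts to; note that a UDP send reports success even if nothing is listening, which is part of why logs on the sending side say so little.

```python
# Sketch of what a Channel 1 push amounts to: fill in the template and send
# one UDP datagram. HOST and PORT are placeholders; the %sn% (song) and %an%
# (artist) substitution is illustrative.
import socket

HOST, PORT = "192.0.2.10", 10001  # placeholders for the channel's destination
TEMPLATE = '<song title="%sn% BY %an%" url="kcpr.org"></song>'

payload = TEMPLATE.replace("%sn%", "Example Song").replace("%an%", "Example Artist")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() succeeds even when nothing is listening at the destination, so a
# "sent" log entry says nothing about whether the encoder actually received it.
sock.sendto(payload.encode("utf-8"), (HOST, PORT))
sock.close()
```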
It’s impossible for me to observe what happens with Channel 1. And because of the protocol (UDP), the logs on our end tell us nothing.
Channels 2 and 3 appear to be a belt-and-suspenders way of updating the embedded Now Playing in your live webcast stream. Channel 3 uses UDP again, so the logs on our end are useless. Channel 2, the Cirrus Console channel, sometimes gets a Connection Refused error and sometimes not, and the difference probably reflects success and failure.
It’s possible that Channel 3 to the AXe encoder never works and that the variable success of the Cirrus channel entirely accounts for the variable success of the updates on your webcast stream.
I can monitor the stream to maybe see what works and what doesn’t.
But to investigate the variable success with the RDS, we will need to make the actual outcomes on an RDS-enabled receiver observable. Do you have any monitoring of the broadcast RDS? If not, somebody probably has to sit in a car for half an hour or more making detailed notes.
Yesterday I found a weird networking issue that prevented one of our servers from sending metadata push updates on tcp:// and udp:// channels from the special IP address of static.spinitron.com. The job of sending these updates is shared among several servers for redundancy, and only one of them was unable to send. This made the failures look random – sometimes you’d get the update and sometimes not. It looks like it affected a number of stations.
I suspect that some change in networking at our hosting service provider introduced this effect but I can’t be sure and don’t know when it started.
Anyway, I removed that server from the pool that sends metadata and now @thmorale’s problem seems to be rectified. So probably other affected stations are receiving more updates than before the fix.