At ATS, we work at the intersection of Telecom and IT, and, to focus our efforts on what we're really good at -- developing software -- we outsource our IT wherever possible. So, naturally, we've turned to a few of the big cloud providers for help. If you're using our services, you're also using AWS and Google Cloud Platform; both are too big and impressive for a mid-size software company like us to ignore, and we firmly believe our customers are better served when we stick to our strengths. But dig a little deeper and you'll realize that some of these providers' offerings tell a tale of how they got big and impressive in the first place. What can the Telco industry learn from the rise of these Retail and Search giants, given what they've exposed as cloud services?
To begin, let's look at the publish-and-subscribe model they both employ, which enables their own analysts to subscribe to almost any stream of interesting data the company generates. Think of an analyst at Amazon who just wants to 'get a feel' for how a book is doing. "No problem", their IT group would say, "just subscribe to Kinesis stream 'xyz' and do whatever you like." If you work at Google and want to 'get a feel' for how many people are using, say, a new feature of Google Maps, the same philosophy would likely apply: "Hey, subscribe to Pub/Sub topic 'xyz' and have at it!" Both tools (Kinesis and Pub/Sub) do similar things: they separate the act of publishing data (from multiple sources) from the act of consuming it (again, at multiple destinations) by putting an elastic layer between the two that expands as more parties need to publish or consume the data.
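The decoupling both services provide can be sketched in miniature. This toy in-process broker is purely illustrative (the class, topic name and record fields are invented here); Kinesis and Pub/Sub layer durability, partitioning and elastic scaling on top of the same basic idea:

```python
from collections import defaultdict

class ToyBroker:
    """A minimal in-process stand-in for a managed pub/sub service.

    Publishers and subscribers only agree on a topic name; neither side
    knows, or cares, how many parties are on the other end.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # An analyst "has at it" by registering interest in a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The producer fires and forgets; fan-out is the broker's job.
        for callback in self._subscribers[topic]:
            callback(message)

broker = ToyBroker()
seen = []
broker.subscribe("book-sales", seen.append)                # the analyst
broker.publish("book-sales", {"isbn": "12345", "qty": 2})  # the producer
print(seen)  # [{'isbn': '12345', 'qty': 2}]
```

Note that adding a second analyst is just one more `subscribe` call; the producer's code never changes.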
In both cases, AWS and Google are leveraging a big bet they made earlier: that thinking in terms of 'data streams' would pay off in the long run, and that it pays to over-build those technologies, because the rate of input (raw data going in) and output (downstream consumption by production systems or future analyst-driven initiatives) can only be expected to grow.
Now, let's contrast that with how things generally work in a Telco, where data tends to be encapsulated in files and distributed only on an 'as-needed' basis, generally to a single downstream system. (Think of Call Detail Records batched into files and mediated from a switch to a billing system.) Imagine you're an aspiring analyst looking to make a name for yourself by finding a least-cost route to Peru, and you're wondering how many calls are happening right now to that location. Where would you start? In the most common model we've seen (and as a vendor, we've seen a lot of them!), this is how your next 90 days would go:
- Request access to some raw CDR data. Go directly to IT.
- Sit on a few conference calls, explaining what you’d like to do.
- Get permission to see some sample data, only to find out it’s in some binary format you’ve never heard of.
- You manage to parse the sample data, and now feel invigorated to try your code on some real production data.
- Sit on a few more conference calls, eventually finding one guy who, if he has the time, will hook you up with an FTP feed.
- FTP feed? Well, if you're any kind of guru, you already know that's the File Transfer Protocol, and maybe you're lucky enough to have a server of your own to receive files on. If you're really lucky, you're on the same subnet as the guru who's going to send them to you.
- You eventually get some trial data flowing to you in real time, but you can no longer remember why you were ever interested in Peru.
- The guy who started sending you the data goes on vacation, or gets locked out of his end, and can no longer troubleshoot the cron job he set up to send you data.
- Some other guy comes along and transfers the production duties to a new server. Your old shell script was not cut over, so you…
- Go back to step 1...
You'd be exasperated, of course! In future posts, we'll look at exactly how the 'big guys' avoid this, and at the technologies now available to the rest of us…
In the meantime, however, I'd ask you to think about the drag on innovation this creates in your company. It's been said before that the key to technological innovation is to fail fast, cheap and often … with the occasional small win in the middle!
From the vantage point of a vendor in the telco space, I'd be the first to admit that, sometimes, this innovation drag serves us well, particularly by placing a 'moat' around our operations once we're firmly in the door. After all, having lived through the ten steps above, our customers aren't relishing a repeat any time soon with our would-be competitors! That keeps competition at bay but, as mentioned earlier, kills innovation. (Besides, as often as not, we're on the outside of someone else's moat!)
For these reasons, it's clear that innovative IT departments -- and vendors! -- will release the shackles on their own data, all the while increasing reliability, throughput and security. The benefits will be enormous when employees (and even lowly vendors!) feel empowered to fail fast, cheap and often!