Ensuring Accuracy in Usage Based Billing for Broadband

Posted by Ryan Guthrie on Sep 10, 2014 10:05:00 AM

Interview with Ryan Guthrie by Dan Baker, Contributor to B/OSS

Years ago, all-you-can-eat plans, the staple of cable/DSL high speed data billing, were great for attracting customers. But today the popularity of YouTube, Netflix, Hulu, and countless other video outlets has turned the tables on service profitability.

Thus, in the U.S. cable internet market, all signs now point to the widespread (and soon) adoption of usage-based billing – the same kind of tiered, usage-capped billing plans now popular in cellular.

But many cable/DSL providers are wholly unprepared for this new reality: accurately tracking usage brings a brand new era of complexity – not to mention added costs – to the cable/DSL billing equation. So how can operators prepare for usage-based billing and ensure accuracy while doing so?

Ryan Guthrie, Marketing VP at Advanced Technologies and Services (ATS), has an answer: plain old revenue assurance, with an internet usage twist. ATS is a small, boutique engineering firm that, judging by Ryan’s command of the switch-to-bill arena, is one of the best-kept secrets among its client base of Tier 1 telcos and a growing roster of cable providers.

Dan Baker: Ryan, I understand the history of your firm goes back to the testing of calls off the wireline switch.

Ryan Guthrie: Yes, for wireline operators, a lot of the work we still do is based on SimCall, our call simulation solution.

With SimCall we take a brain dump of a Class 5 switch and, through reverse engineering, we simulate calls to the switch to figure out how it would rate and route. We then build an expectations database to say: based on the company’s business and rating rules, how should that call have been rated and routed? Would it have rated as a local call or a toll call?
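
To make the comparison step concrete, here is a minimal sketch of checking simulated results against an expectations database. The record layouts, field names, and sample data are assumptions for illustration only, not ATS's actual schema:

```python
# Hypothetical sketch: compare how simulated calls were rated and routed
# against an expectations table built from the operator's business and
# rating rules. Field names and sample data are illustrative only.

simulated = [
    {"call_id": 1, "dialed": "6095551234", "rated_as": "local", "route": "TRK-07"},
    {"call_id": 2, "dialed": "2125550000", "rated_as": "local", "route": "TRK-02"},
]

expected = {
    1: {"rated_as": "local", "route": "TRK-07"},
    2: {"rated_as": "toll",  "route": "TRK-02"},  # business rules say this should be toll
}

def find_discrepancies(simulated, expected):
    """Return simulated calls whose rating or routing differs from expectations."""
    issues = []
    for call in simulated:
        exp = expected.get(call["call_id"])
        if exp is None:
            issues.append((call["call_id"], "missing expectation"))
        elif (call["rated_as"], call["route"]) != (exp["rated_as"], exp["route"]):
            issues.append((call["call_id"],
                           f"got {call['rated_as']}/{call['route']}, "
                           f"expected {exp['rated_as']}/{exp['route']}"))
    return issues

for call_id, problem in find_discrepancies(simulated, expected):
    print(f"call {call_id}: {problem}")
```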

Baker: How does your call simulation approach differ from the test call generation robots offered by companies like Roscom and The Boardroom?

Guthrie: It’s true, those companies offer a similar solution using their call generation test boxes which they deploy throughout the network. They’ve been our biggest competitors.

But we feel our approach is superior because SimCall takes translations off our server and merely simulates a call. That’s better than having to physically make the calls, because the number of calls you can make each day is limited. By contrast, we can simulate hundreds of thousands of calls into a switch in a matter of 5 to 10 minutes.

I’ll admit that the hardware test call generators actually trace the calls down to a billing feed, and we only do that as a separate service. But in our experience, that depth is often not necessary because call simulation literally checks every call in a switch in ten minutes.

We provide a “big data” style of call testing: we test all switches so you can find all the outlier errors; that way we avoid having to design a sampling plan or guess where the errors will occur.

In the wireline world, the CDR that gets cut from the switch tells you whether the call is a local or toll call, after which the mediation system rates it appropriately. SimCall does not check the rating scheme, but validates that the definitions are created properly in the switch. If I call you, the call gets rated as a toll call versus a local call based on LERG data, business rules, and class of service. It’s a pre-mediation check, raw off the switch.
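
As a toy illustration of the kind of definition being validated, the sketch below classifies a dialed number as local or toll from a tiny NPA-NXX table standing in for LERG data. Real switch translations involve far more (class of service, business rules, routing), so treat this purely as an assumption-laden reduction:

```python
# Illustrative only: a toy local-vs-toll decision based on a tiny NPA-NXX
# table standing in for LERG data. Not the actual SimCall logic.

LOCAL_EXCHANGES = {("609", "555"), ("609", "777")}   # NPA-NXX pairs treated as local

def classify_call(dialed_number):
    """Return 'local' or 'toll' for a 10-digit NANP number."""
    npa, nxx = dialed_number[:3], dialed_number[3:6]
    return "local" if (npa, nxx) in LOCAL_EXCHANGES else "toll"

print(classify_call("6095551234"))   # local
print(classify_call("2125550000"))   # toll
```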

Baker: Does your call simulation service work in the mobile world too?

Guthrie: Mobile is a different animal in so many ways. The switches are, of course, quite different. In mobile, we’re often hired to do dropped-call analysis. For instance, certain cell phones drop more calls than others, so we trace handset serial numbers to find which batches from a particular cellphone maker are delivering more dropped calls.
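
A simplified sketch of that kind of analysis might group dropped-call counts by a handset identifier prefix (here the IMEI's TAC, which identifies the maker and model). The records and the exact grouping key are assumptions for illustration, not ATS's actual method:

```python
# Hypothetical sketch: rank handset batches by dropped-call rate.
# Grouping on the first 8 IMEI digits (the TAC) is an assumption.
from collections import defaultdict

call_records = [
    {"imei": "35401010234567", "dropped": True},
    {"imei": "35401010555555", "dropped": False},
    {"imei": "86012020999999", "dropped": True},
    {"imei": "86012020888888", "dropped": True},
]

totals = defaultdict(lambda: [0, 0])        # batch -> [dropped, total]
for rec in call_records:
    batch = rec["imei"][:8]                 # TAC identifies the maker/model batch
    totals[batch][1] += 1
    if rec["dropped"]:
        totals[batch][0] += 1

for batch, (dropped, total) in sorted(totals.items(),
                                      key=lambda kv: kv[1][0] / kv[1][1],
                                      reverse=True):
    print(f"batch {batch}: {dropped}/{total} calls dropped")
```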

The biggest need is to measure traffic expectations. We deliver KPI dashboards to monitor traffic by switch, day, and hour, then build a statistical model. The operator can then ask: on Friday at 4:00 PM on my switch in Trenton, New Jersey, how many minutes would I expect to get and how many minutes did we actually get?

If the KPI exceeds a two-standard-deviation threshold, the system triggers an alarm. If the volume is too high, you suspect fraud; if the volume’s too low, then maybe files weren’t taken off the switch or an outage occurred.
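
A minimal sketch of that two-standard-deviation alarm, assuming a history of minute counts keyed by switch, weekday, and hour (the switch name, samples, and record shape are invented for this example):

```python
# Toy traffic-expectation alarm: flag volumes outside mean +/- k standard
# deviations for a given (switch, weekday, hour). Data is illustrative.
from statistics import mean, stdev

history = {("TRENTON-01", "Fri", 16): [41_200, 39_800, 40_500, 42_100, 40_900]}

def check_traffic(switch, weekday, hour, actual_minutes, k=2.0):
    """Return an alarm string if actual volume is outside the expected band."""
    samples = history[(switch, weekday, hour)]
    mu, sigma = mean(samples), stdev(samples)
    if actual_minutes > mu + k * sigma:
        return "ALARM: volume too high - possible fraud"
    if actual_minutes < mu - k * sigma:
        return "ALARM: volume too low - missing files or an outage?"
    return "OK"

print(check_traffic("TRENTON-01", "Fri", 16, actual_minutes=12_000))
```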

An interesting difference between wireline and mobile is that in wireline, the decision on how to rate the call (local or toll) is made at the switch. But in mobile, rating happens later on in the mediation stream. For that reason, we often get a dump of the mediation tables and simulate calls through mediation to do the same compare against expectations.

Baker: OK, let’s get into the cable/DSL internet market. What’s the issue these operators face as they move toward usage-based billing?

Guthrie: Here’s the key question we ask in cable internet: is usage being guided to the right place to do accurate usage-based billing?

If you have high-speed internet at your home, the MAC address of the modem at your home needs to populate several different downstream systems to ensure that usage is properly guided to the bill.

To figure that out, we get extracts from various points. For instance, we get a table listing the MAC addresses of all modems on the network. Then we’ll get a mediation feed and an inventory database, and data from billing. Sometimes the wrong keys get passed down. So the MAC address might be in the network inventory, but not in your billing account.

The trick is to ensure the physical ID and billing ID remain consistent all the way from usage generation to the bill – in that way you’re sure that the usage on your bill is not someone else’s.
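
A simplified sketch of that guiding check: the same MAC address should appear, keyed to the same billing account, in the inventory, mediation, and billing extracts. The extract shapes and account IDs below are assumptions for illustration:

```python
# Toy MAC-address guiding audit across three extracts. Real extracts would
# come from the operator's systems; these dictionaries are invented.

inventory = {"AA:BB:CC:00:11:22": "ACCT-1001", "AA:BB:CC:00:33:44": "ACCT-1002"}
mediation = {"AA:BB:CC:00:11:22": "ACCT-1001", "AA:BB:CC:00:33:44": "ACCT-1002"}
billing   = {"AA:BB:CC:00:11:22": "ACCT-1001"}   # second modem never made it to billing

def audit_guiding(inventory, mediation, billing):
    """Flag MAC addresses whose keys are missing or inconsistent downstream."""
    issues = []
    for mac, acct in inventory.items():
        if mac not in billing:
            issues.append(f"{mac}: in inventory but missing from billing")
        elif billing[mac] != acct:
            issues.append(f"{mac}: billed to {billing[mac]}, inventory says {acct}")
        if mediation.get(mac) not in (None, acct):
            issues.append(f"{mac}: mediation keys usage to {mediation[mac]}, not {acct}")
    return issues

for issue in audit_guiding(inventory, mediation, billing):
    print(issue)
```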

Now as we’ve done these MAC address checks for operators, we’ve found a huge discrepancy rate – on the order of a 10% to 15% error rate.

Baker: Wow, that’s a pretty high error rate. So how does your system go to work? What do you look at and deploy to analyze these issues?

Guthrie: We do a usage accuracy analysis by putting our Unix boxes in the network where they act as customer accounts. We script them to generate traffic at set points in the day and from different locations. And again we get feeds from the aggregation system, mediation, and billing.

We call this a Reference Element Data Comparison: it’s nothing more than using the traffic we generate to do switch-to-bill audits.
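
An illustrative sketch of that audit: usage we generated ourselves from reference accounts is compared, per account, against what each downstream feed reports. The feed names, volumes, and the 2% tolerance are all assumptions:

```python
# Toy switch-to-bill usage audit using reference accounts. All figures are
# invented; the tolerance for protocol overhead is an assumption.

generated_mb = {"REF-ACCT-01": 500, "REF-ACCT-02": 750}       # what our boxes sent
feeds = {
    "aggregation": {"REF-ACCT-01": 498, "REF-ACCT-02": 751},
    "mediation":   {"REF-ACCT-01": 498, "REF-ACCT-02": 600},  # usage lost here
    "billing":     {"REF-ACCT-01": 497, "REF-ACCT-02": 600},
}

TOLERANCE = 0.02   # allow 2% variance

for feed_name, measured in feeds.items():
    for acct, sent in generated_mb.items():
        seen = measured.get(acct, 0)
        if abs(seen - sent) / sent > TOLERANCE:
            print(f"{feed_name}: {acct} generated {sent} MB but shows {seen} MB")
```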

Now usage-based billing is one key concern, but another is enforcing speed caps accurately. Most of the cable/DSL operators are managing speed caps today. At Cablevision, for example, you can pay for a 10 Mbit/sec or a 20 Mbit/sec line.

So to check speed caps we do a provisioning-to-billing comparison and ask: What speed and volume cap are you set for in provisioning, and are you actually being billed for that? Am I paying for 2 Mbit/sec but getting 10 Mbit/sec? So there are over- and under-billing opportunities here.
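
A hedged sketch of that provisioning-to-billing comparison for speed tiers, with invented extracts: flag any account whose provisioned cap differs from the tier being billed.

```python
# Toy speed-cap reconciliation between provisioning and billing extracts.
# Account IDs and speeds are invented for illustration.

provisioned = {"ACCT-1001": 10, "ACCT-1002": 20, "ACCT-1003": 10}   # Mbit/s in the network
billed_tier = {"ACCT-1001": 10, "ACCT-1002": 10, "ACCT-1003": 20}   # Mbit/s on the bill

for acct, speed in provisioned.items():
    billed = billed_tier.get(acct)
    if billed is None:
        print(f"{acct}: provisioned at {speed} Mbit/s but not billed at all")
    elif billed < speed:
        print(f"{acct}: under-billing - getting {speed} Mbit/s, paying for {billed}")
    elif billed > speed:
        print(f"{acct}: over-billing - paying for {billed} Mbit/s, getting {speed}")
```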

Another key piece of analysis is Subscriber Consumption Modeling. Here we build a statistical model to say: how much data do I expect to get on a Friday at 4:00 and how much do I actually get? And, of course, we alarm the operator if something is out of whack.

Another useful case is developing forecast models for future consumption. The data boom is on, and operators have to build out their networks accordingly, so our statistical models are proving to be invaluable forecasting tools.
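
As a toy illustration of trend-based forecasting, the sketch below fits a straight line to monthly per-subscriber usage and extrapolates it forward. ATS builds its own statistical models; this least-squares example and its data are purely assumptions:

```python
# Toy consumption forecast: least-squares linear trend on monthly usage.
# The monthly figures are invented for illustration.

monthly_gb = [310, 335, 360, 392, 421, 455]          # avg GB per subscriber, invented
n = len(monthly_gb)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(monthly_gb) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_gb))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

for months_ahead in (6, 12):
    forecast = intercept + slope * (n - 1 + months_ahead)
    print(f"{months_ahead} months out: ~{forecast:.0f} GB per subscriber")
```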

Baker: Ryan, I can see that these new revenue assurance checks in cable/DSL are a variation on the tricks used in wireline/mobile for some time. How do you touch the big data world with your analysis techniques?

Guthrie: All the use cases I’ve discussed so far run on standard Unix technology and databases. But we do touch big data in a couple of areas.

First, we store huge quantities of data for customers to meet lawful intercept rules which sometimes require storing data for 7 years. We also supply subpoena support for accessing that data using cloud computing and big data technology.

Now large carriers have lived with these lawful intercept requirements for several years now, but it looks like these requirements will soon be extended to smaller carriers who will also need to comply.

A second big data application for us is social network analysis: combing through CDRs to determine who you talked to, how often you received calls from others, and for how long. That data is being used widely in marketing campaigns and churn analysis.

The use cases here are all over the map. One popular campaign technique leverages a friends-and-family type offer. If there are four Verizon customers and one AT&T customer who regularly call one another, Verizon can detect that and say: “Gee, this AT&T customer is talking to folks in my network quite a bit, so we should really bring him into the fold because we’re paying AT&T these interconnect fees.” So what Verizon could do is send out a promotion to the four Verizon customers saying, “Here’s a friends and family offer. Bring a friend onboard and get a $100 discount off your next phone purchase.”
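
A small sketch of the detection side of that idea: find off-net numbers that several on-net subscribers call regularly. The CDR fields, subscriber labels, and threshold are all invented for this example, not a description of any carrier's actual system:

```python
# Toy friends-and-family targeting: which off-net numbers are called by many
# on-net subscribers? All identifiers and thresholds are illustrative.
from collections import defaultdict

cdrs = [
    {"caller": "on-net-A", "called": "offnet-X"},
    {"caller": "on-net-B", "called": "offnet-X"},
    {"caller": "on-net-C", "called": "offnet-X"},
    {"caller": "on-net-D", "called": "offnet-X"},
    {"caller": "on-net-A", "called": "offnet-Y"},
]

callers_per_offnet = defaultdict(set)
for cdr in cdrs:
    callers_per_offnet[cdr["called"]].add(cdr["caller"])

MIN_CALLERS = 4   # e.g. four on-net customers regularly calling the same off-net number
for offnet, callers in callers_per_offnet.items():
    if len(callers) >= MIN_CALLERS:
        print(f"{offnet} is called by {len(callers)} on-net subscribers: "
              f"candidate for a friends-and-family promotion to {sorted(callers)}")
```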

So instead of spending $10,000 buying a billboard ad, you can send a very targeted offer that’s a much smarter investment.

Baker: Thanks, Ryan. The technical details and use cases you’ve given us are very interesting.

Topics: Usage Based Billing, Usage Meter Accuracy, Broadband