Birst Interview: Introducing Support For HANA

We spoke with the VP of Product Strategy at Birst, Southard Jones, ahead of the announcement of Birst’s support for SAP HANA. Birst provides an enterprise-calibre, cloud-based Business Intelligence platform. Its approach is designed to be less costly and more agile than legacy BI, and more powerful than data discovery tools. Birst gives business teams the ability to solve problems using data in new ways, while maintaining a uniform approach to how that business information is managed.

Can you give us a brief overview of this announcement?

The idea of in-memory computing, from an analytics perspective, has been talked about for some time now, primarily because of the lightning-fast response times you can get. Depending on which benchmark you read, HANA can query billions of records in close to a second. So far we’ve run a number of those benchmarks on Birst as part of this support effort, and what we’ve found is that this holds true for us too.

We first ran a benchmark query on one of our larger customer databases, and it took 45 seconds to return. Then we tried it on a column-store database called Infobright, and that took about two and a half seconds. But when we tried it with HANA, it took a staggering 100 milliseconds. 100 milliseconds on a database that has over a billion records is nothing short of astonishing. So, the speed is absolutely there with HANA, and that’s why we chose it as our preferred in-memory database.
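As a rough illustration of how such a comparison can be measured, here is a minimal sketch of timing a query's wall-clock response through Python's DB-API, with `sqlite3` standing in for the databases named in the interview; the table and query are purely illustrative, not Birst's actual benchmark.

```python
import sqlite3
import time

def time_query(conn, sql):
    """Run a query and return (rows, elapsed_seconds) measured by wall clock."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return rows, time.perf_counter() - start

# sqlite3 stands in here for the traditional / Infobright / HANA connections
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 100.0), ("APAC", 250.0), ("EMEA", 50.0)])

rows, elapsed = time_query(conn, "SELECT region, SUM(amount) FROM sales GROUP BY region")
print(rows)     # aggregated totals per region
print(elapsed)  # wall-clock seconds for the query
```

The same `time_query` helper can then be pointed at each backend in turn, so all three measurements are taken the same way.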

What is it about HANA that makes it so appealing to Birst?

Originally, Birst ran on traditional databases, which work great up to a certain data size. But with really large datasets, response times can stretch beyond what business users are willing to wait for. What HANA represents is a world-class, in-memory database.

We could have attempted to build our own in-memory database, as Tableau and Qlik have tried to do, but that doesn’t solve the real problem – which is that you need to be able to handle terabytes of data with response times of less than a second. So when our customers approached us with 20 or 30 terabytes of data and wanted sub-second response times, there was really only one place doing that.

The second thing that attracted us to HANA is that we’re a BI company, not a database one. As such, our resources and time are spent ensuring that business users can leverage our tools, rather than having their IT teams run them. Naturally, we wanted to find the best database solution on the market. Tableau and Qlik are great if you want to query on your desktop, something like Excel, but those solutions do not work at the enterprise level.

How will this announcement help differentiate Birst from its competitors?

Ironically, we compete a lot with the data discovery companies like Tableau and Qlik, and this announcement is a big differentiator from them. We have never said that we can build a better in-memory database, and that’s what Qlik and Tableau are saying. What Birst is saying is that if you want to use in-memory computing, you should use the best one out there on the market. Don’t use a desktop version like Qlik and Tableau, but one that is enterprise-ready. That’s why this announcement is a big deal for the BI market.

How does Birst’s approach to cloud BI fit with SAP HANA?

In the BI world, what usually happens is that people getting started with HANA will try SAP HANA One, which is only available on Amazon Web Services. So, you go to Amazon and buy HANA by the hour. You can deploy a Birst instance in Amazon Web Services as well. With this, you can run HANA and Birst together on demand – so, from this perspective, HANA and Birst are a great marriage.

More to the point, 25 percent of our customers take Birst and deploy it as a virtual appliance – that is, the exact same code base and architecture used in our public cloud can be deployed on a customer’s premises, where they can run it alongside their HANA appliance. This is where Birst shines: large enterprises that want to use HANA in their datacentre can leverage their Birst appliance right there with their HANA appliance.

We’re staying true to the roots of cloud BI in that it is low-management, run by business users, and updated in real time. Now, though, you can also run Birst in your own datacentre.

How can in-memory computing be brought together with BI?

Bringing in-memory to BI comes down to three things:

1)   Business users expect Google-like response times. So when you’re talking about terabytes and petabytes of data, there is no solution other than in-memory. Speed of response is really the biggest benefit of in-memory.

2)   In-memory has to be manageable by a business user, and today this simply is not the case. Tools like Qlik and Tableau are manageable, but they can’t really scale like HANA; at large data volumes you need a tool that handles the manageability for you. This is where Birst and in-memory work really well: all of the manageability is automated through our architecture.

3)   A recent trend we have seen is that business users are increasingly uninterested in where their data comes from. All they really want is to ask business questions. This is why a business layer, or semantic layer, on top of a traditional database is fantastic – and that same layer on top of in-memory gives the business user the flexibility and speed they need. In-memory alone is fast, of course, but it only lets you ask data questions, whereas in-memory with BI lets you ask business questions.

Where is SAP looking to expand the use of HANA and where can BI help this process?

Where HANA has got a lot of traction is in SAP accounts and on SAP-type data. Where it has not been as successful is with data outside of that world. HANA is so much stronger than any other in-memory BI solution out there – with Tableau and Qlik, they can handle about 100–200GB if you’re lucky. HANA is only getting warmed up at those numbers; you can throw terabytes at it and it doesn’t even flinch.

When you think about what the power of BI is, it really lies in the 90 percent of data you’ve produced in the past year that isn’t simply sitting in your corporate warehouse. But what are you going to do with this data?

What you need is something that is fast, low-management, low-overhead, and delivers sub-second query response times. From a database perspective, there’s only one solution for this, which is HANA, and Birst is the only BI tool that sits on top of it. When you put Birst on top of HANA, this really is the only way to get HANA into business users’ hands; no business user can say they can run HANA, but every business user can say they can run Birst.

Now that you’ve got HANA into the hands of the world, you can begin to analyse Salesforce data, Marketo data, and social media. You don’t have to wait for the people in IT, who are up to their necks in SAP support requests, to deliver results.

Is the Price of HANA an Issue?

A doubter would say that it’s the best out there but too expensive. And that’s true; it’s by no means cheap. What I would say, however, is that SAP HANA One is fairly affordable. And there is an approach that gives you the best of both worlds: store your data in an insanely cheap disk-based solution like Amazon Redshift, and pool your aggregates and most common queries into a HANA One in-memory instance. What you have then is an affordable in-memory solution combined with an incredibly cheap way of storing terabytes of data, and you still get sub-second response times. That’s what we can do, and no one else is capable of it.
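The two-tier pattern described above can be sketched very simply: serve pre-computed aggregates from a fast in-memory tier (standing in for HANA), and fall back to a full scan of the cheap bulk store (standing in for Redshift) for everything else. The router class, store names, and data below are hypothetical illustrations, not Birst's implementation.

```python
class TieredQueryRouter:
    """Route a query to the hot in-memory tier when a pre-computed
    aggregate exists for it, otherwise scan the cold disk-based tier."""

    def __init__(self, hot_aggregates, cold_scan):
        self.hot = hot_aggregates  # dict: query name -> pre-computed result
        self.cold = cold_scan      # callable that scans the full dataset

    def run(self, query):
        if query in self.hot:                # aggregate pooled in-memory
            return self.hot[query], "in-memory"
        return self.cold(query), "disk"      # fall back to the bulk store

# Illustrative detail rows living in the "cold" tier ...
detail_rows = [("EMEA", 100.0), ("APAC", 250.0), ("EMEA", 50.0)]

def cold_scan(query):
    # This sketch supports only one query shape: total revenue across rows.
    return sum(amount for _, amount in detail_rows)

# ... while the answer to the most common query is pooled in-memory.
router = TieredQueryRouter({"total_revenue": 400.0}, cold_scan)

print(router.run("total_revenue"))   # served from the in-memory tier
print(router.run("ad_hoc_question")) # falls back to the disk tier
```

The design choice is the one the interview describes: only the handful of aggregates that answer the most common questions need to live in the expensive fast tier, while the raw terabytes stay on cheap disk.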

This ‘multitude of databases’ approach is something I think will become more popular in the BI world over the next 6–12 months. Some people won’t necessarily be able to afford to run HANA on terabytes of data, but that mixture of really cheap on-disk storage with the speed of in-memory computing will start to emerge as a best practice in BI database design.

This interview was conducted by Furhaad Shah. Furhaad is an Editor at Dataconomy focusing on Business Intelligence, Analytics and Data Security. You can contact him here:



