In part one of our History of BI series, we looked at the origins of BI, the way it developed over the 1960’s and 70’s, and the key technologies that emerged in those eras. The article began by mapping the way data storage changed from hierarchical database management systems (DBMS), like IBM’s IMS in the 60’s, to network DBMSs and then to relational database management systems (RDBMS) in the late 70’s.
Both the 60’s and 70’s were interesting eras for BI, not least because BI vendors like SAP, JD Edwards and Siebel started offering tools that allowed non-programmers to dive into the world of data access and analytics. However, while the tools to access and extract data from these mainframe-based systems were extremely powerful, they required expertise that many business users did not have. More importantly, the way data was stored during this time led to “massive anomalies and inaccuracies in data”, resulting in “missing values, partial records, misspelled information, inaccurate data, and more.” This combination of scarce expertise and poor data quality was the primary reason user success rates were low during this period.
However, the 1980’s and 90’s were revolutionary for BI in many respects, and ultimately transformed the way businesses extracted value from their data.
The Beginning of a Revolution
In the 70’s and early 80’s, a typical business used a collection of large mainframe-based application systems for most of their operations. As mentioned previously, however, there were significant inadequacies with such systems (cost, bulk, difficulty of use, data inaccuracy). Unsurprisingly, businesses began turning to what emerged as a new development in the 80’s: the Information Center. The IC, as it was referred to, was essentially a support center that acted as an intermediary between the non-technical end users and IT. This center was able to establish where data was stored, how to obtain it, and which tools to use in accessing it.
By the late 80’s, however, the emergence of two transformative technologies – the personal computer and the spreadsheet – obviated the need for an intermediary like the IC. Of course, PC’s in the 80’s were nothing like the powerful machines we use today; early models offered only limited analytical and processing power. But with the release of Lotus 1-2-3, a spreadsheet program from Lotus Software, end users gained control over their own data and, most importantly, tools with which to access and organize it. Computing had officially entered a new era.
Transitioning From Mainframes: DBMS to RDBMS
Nevertheless, the transition from mainframes to personal computers and spreadsheets was neither smooth nor complete. Many companies still used a combination of mainframes, distributed systems, fixed-function terminals, several databases, and personal computers to manage their data. The manual navigation that was characteristic of DBMSs, along with the fact that they did not model relationships between tables, led to wider adoption of relational database systems (which, as mentioned in our previous article, emerged in the 70’s but did not reach widespread usage until the 80’s and 90’s). Relational databases made it simple to delete and modify records, avoid duplication and inconsistency, and maintain security. Most importantly, the development of Structured Query Language (SQL) allowed users to insert, update, and delete records, and to create and drop tables. Complicated queries could be written to extract data from many tables at once, which significantly helped companies access and analyze their data.
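To make those SQL operations concrete, here is a small sketch using Python’s built-in sqlite3 module. The tables and values are invented for illustration and are not drawn from any particular 1980’s-era product:

```python
import sqlite3

# An in-memory database standing in for an early RDBMS (illustrative schema).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE: define two related tables.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")

# INSERT: add records.
cur.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
cur.execute("INSERT INTO orders VALUES (10, 1, 250.0)")
cur.execute("INSERT INTO orders VALUES (11, 1, 100.0)")

# UPDATE and DELETE: modify details without disturbing other records.
cur.execute("UPDATE orders SET amount = 300.0 WHERE id = 10")
cur.execute("DELETE FROM orders WHERE id = 11")

# A query that joins both tables at once, the kind of multi-table
# extraction that made relational systems so useful.
cur.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
""")
result = cur.fetchall()
print(result)  # [('Acme Corp', 300.0)]
conn.close()
```

Because relationships live in the data itself (the `customer_id` column) rather than in hand-navigated pointers, the same query works no matter how the records were entered.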
Although IBM developed the first RDBMS (System R), Oracle (then “Relational Software, Inc.”) was the first to commercialize the technology, in 1979; from there, the relational database grew into the dominant form of bulk storage in the digital economy. Other BI vendors soon started to appear – Crystal Reports (1985), MicroStrategy (1989), Business Objects (1990), Actuate (1993). The newly established competition between RDBMS vendors led to open standards, enhancements, and a common means of evaluating databases and their tools.
Data Warehousing, ETL and OLAP
One such enhancement was the concept of data warehousing, a term that dates back to the late 1980’s, when IBM researchers Barry Devlin and Paul Murphy developed the “business data warehouse.” At its core, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. Before data warehousing, completing reporting requests could take a considerable amount of time — often days or weeks — “using antiquated reporting tools that were designed more or less to ‘execute’ the business rather than ‘run’ the business.”
With data warehousing, data that had previously been spread across numerous sources — Online Transaction Processing (OLTP) systems, historical data repositories and external sources — could now be held together in one place and queried with one tool, allowing business users to search the information efficiently and, most importantly, to gather an overarching strategic picture of their business and its operations. Additionally, the data warehouse environment includes, as Oracle explains:
“…an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.”
In essence, extract, transform, and load (ETL) utilities were developed over the years to move data from disparate data sources, transform it into a common format, and then load it into the data warehouse. Once this was accomplished, the data could be sorted and separated into compressed subsets of the warehouse – “data marts” – where each business unit within an organization could access information specific to its needs. Moreover, these warehouses were specifically designed to support the analytical functions required for business intelligence (OLAP, or online analytical processing), making it possible for users to execute rapid and complex analytical queries.
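The extract-transform-load flow can be sketched in a few lines of Python. Everything here is invented for illustration: the two "operational" sources, their field names, and the warehouse row format are assumptions, not the schema of any real product:

```python
from datetime import date

# Two disparate "operational" sources with incompatible formats (illustrative).
orders_system = [
    {"OrderID": 1, "Cust": "Acme", "Total": "250.00", "Date": "1995-03-01"},
]
legacy_feed = ["2|Globex|100.50|19950302"]  # pipe-delimited legacy export

def extract():
    """Pull raw records from each source system."""
    return orders_system, legacy_feed

def transform(orders, legacy):
    """Normalize both feeds into a single common warehouse row format."""
    rows = []
    for r in orders:
        rows.append({
            "order_id": r["OrderID"],
            "customer": r["Cust"],
            "amount": float(r["Total"]),
            "order_date": date.fromisoformat(r["Date"]),
        })
    for line in legacy:
        oid, cust, amt, d = line.split("|")
        rows.append({
            "order_id": int(oid),
            "customer": cust,
            "amount": float(amt),
            "order_date": date(int(d[:4]), int(d[4:6]), int(d[6:])),
        })
    return rows

def load(rows, warehouse):
    """Append normalized rows into the central warehouse store."""
    warehouse.extend(rows)

warehouse = []
load(transform(*extract()), warehouse)

# A "data mart" style rollup: total revenue per customer, the kind of
# aggregate an OLAP query would answer against the warehouse.
revenue = {}
for row in warehouse:
    revenue[row["customer"]] = revenue.get(row["customer"], 0.0) + row["amount"]
print(revenue)  # {'Acme': 250.0, 'Globex': 100.5}
```

The point of the transform step is exactly what the article describes: once both feeds share one format, a single query tool can answer questions that previously required stitching together several systems by hand.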
The technologies and processes that developed in the 80’s and 90’s were groundbreaking in many ways. The emergence of personal computing, RDBMSs, data warehousing and other technologies meant that business users could find critical answers to their questions, reduce the time needed for data entry and manipulation, and establish synergy between the various departments within their organisations.
Due to this reformation in technology, BI tools proliferated and quickly became much more advanced. Users in the 90’s could turn to Acta, Informatica, Information Builders, Sunopsis, and Ascential for information management; Business Objects, MicroStrategy, BRIO, Actuate, Cognos, and Crystal Reports for query and reporting; and Spotfire, Cartesis, OutlookSoft, SRC, and Hyperion for performance management and data visualization.
In our next segment, we will look at the development of BI in the early 2000’s and give an overview of how the BI landscape grew from one of burgeoning competition between a handful of companies to what it is today: a booming and highly saturated market.
Furhaad worked as a researcher/writer for The Times of London and is a regular contributor to the Huffington Post. He studied philosophy on a dual programme with the University of York (U.K.) and Columbia University (U.S.). He is a native of London, United Kingdom.