The Emergence of Decision Science
- keithprince
- Dec 11, 2018
- 4 min read
When the Teradata Data Base Computer was being designed, the world of business management was going through a metamorphosis. Executives and their managers were shifting from a world of ‘gut feel’ and hindsight-driven decision making to a new world that demanded decision making based on information, information that was increasingly being stored on computer systems.
In 1979, John Rockart published an HBR article called “Chief Executives Define Their Own Data Needs”, in which he argued that the systems used by corporate leaders are better if they provide data and insights about the business processes and the interactions with customers and suppliers that form the bedrock of corporate success. That article became the lightning rod for the launch of the ‘Executive Information System’ (EIS), a new breed of technology specifically designed to improve strategic decision making for the corporation’s executive class.
The nascent success of EIS led to a greater demand to serve the information needs not just of the company’s management, but also of the employees and processes concerned with the myriad of tactical and operational decisions that had to be made each and every day. And as the price of computers continued to fall, more and more companies were converting their records management from ‘paper’ to computers.
In the 1980s, Howard Dresner popularised the term ‘Business Intelligence’ to encapsulate the use of data residing on computer systems to support strategic and tactical decision making at scale, incorporating the underlying theory and capabilities of the decision support systems previously advanced by John Little at MIT.
Information systems that could support this new wave of decision science were needed. The march of the mainframe had taken many corporations as far as a monolithic, one-machine-for-everything approach could. Information was report-based and static, offering no easy way for a manager to interrogate the detailed data behind the report or to ask ‘what-if’ questions of the data. The explosion in mini-computers as the solution for departmental needs alongside the mainframe was slowing as corporations found that running so many computer platforms was very costly and made a single version of the truth almost impossible to achieve. Often, the burden of reconciliation between so many different systems slowed progress, reduced agility and hurt the pursuit of operational optimisation. In effect, existing technologies were not designed to cope with, let alone improve, the tactical and operational decisions that required high volumes of detailed data for ad-hoc, complex analysis.
What was needed was a new computing platform, designed with the needs of the information-driven organisation in mind, where any employee could analyse data to improve how the business performed and then measure the impact of those improvements. Such a ‘Test & Learn’ corporation needed a system that made analysis, decision making and measurement easier and quicker than before. But more than that, the platform needed to become a business tool, placed in the hands of anyone in the business rather than the few technocrats who worked in data processing or operations research.
Teradata was designed to be ‘simply powerful’. It was designed from the ground up to solve the acquisition, storage and retrieval of data in a flexible, end-user-friendly way. Analysis could be as iterative as anyone wanted. If a manager had a new idea and needed to analyse the data to see its potential impact, all he or she had to do was formulate the question in an English-like language, query the data and retrieve the answer.
To store vast quantities of data and allow fast retrieval, Teradata designed a Massively Parallel system that harnessed a large array of microprocessors to slice the work into small units, speeding up the whole process beyond anything a mainframe was capable of. And as one of the first companies to adopt Ted Codd’s Relational Data Model, Teradata stored data in a way that was intuitive and more easily navigable than the hierarchical and network data bases that already existed. The Relational Data Model also spawned an English-like language called SEQUEL (which later became SQL) that allowed anyone, with just a small amount of training, to formulate a query against the data base and get the information they needed.
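To give a flavour of what that looked like in practice, here is a minimal sketch of the kind of ad-hoc, English-like query being described. The table and column names (a hypothetical store_sales table) are invented purely for illustration and are not drawn from any real schema.

```sql
-- A manager testing a new idea: which regions sold the most last week?
-- 'store_sales' and its columns are hypothetical, used only to illustrate
-- the declarative, English-like style of SQL.
SELECT   region,
         SUM(sales_amount) AS total_sales
FROM     store_sales
WHERE    sale_date BETWEEN DATE '1985-01-01' AND DATE '1985-01-07'
GROUP BY region
ORDER BY total_sales DESC;
```

The question is stated declaratively, in near-English; how the work is divided across the parallel processors is left entirely to the system.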
By being a specialised system, the Teradata DBC relieved the mainframe of the burden of handling a type of workload it wasn’t designed to cope with and, as a result, the business had access to a tool that it could easily use to create value and establish a competitive advantage.
The compound effect of combining a large number of microprocessors, in parallel, into one system meant that it wasn’t just about brute force. The system’s architecture meant that Teradata had no single point of failure: it could withstand component failures and still deliver answers to the business. It was modular, so it could grow incrementally with the exponential demand for insights. And the building blocks of the Teradata Data Base Computer were commodity parts, meaning that it didn’t rely on processor, memory or disk storage components that could be here today and gone tomorrow.
At a time when corporations were dreaming of using Decision Science to streamline operations, market more effectively to customers, quantify the risks of doing business more accurately and out-compete their rivals, the Teradata Data Base Computer appeared fully formed and ready to serve as the ultimate Decision Science business tool, one that enabled those dreams to come true.