In this post we'll take a brief look at the implications of some of these changes on traditional business intelligence and analytic capabilities. Think of this as a quick survey of the landscape and not a deep dive into its many facets. Let's start with a seemingly esoteric NoSQL capability: graph processing and databases.
Graph Processing and Databases
We benefit from graph technology every day. Consider a few use cases:
- Real-time recommendations on retail web sites, and the sometimes creepy "friend recommendations" on Facebook
- Social network analytics for marketing and advertising, both echo chamber and gold mine
- GPS applications like Waze or Google Maps
- Less obviously, network and operations root cause analysis, and contextualizing real-time streams of event and other data from internet of things sensors
- Finally, saving money, and sometimes lives, via fraud detection, cybersecurity, and medical research
When the relationships between things are as important as the things themselves, graphs are attractive. Beyond the ubiquity of this hidden technology, notice that unlike traditional BI and analytics, it can be real time for some key use cases, e.g. recommendations and identity and access management.
Before we move on to look at NoSQL more generally, it's important to recognize a few things about graph processing and graph databases. Graph databases often compete as OLTP tools (e.g. Neo4j). Large scale graph processing is more of an OLAP capability (e.g. Giraph, Pregel), so the graph's data may sit in another NoSQL distributed data store. Having said that, the intuitive nature of the graph database lends itself to OLAP.
Depending on the scale and type of the analysis, graph queries in languages like Cypher or Gremlin are often dramatically simpler to write than equivalent queries in SQL. Queries that traverse many layers of related things can also perform orders of magnitude faster in a graph database than in a relational database, as the sketch below suggests.
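As a hedged illustration (the Person/FRIEND graph, credentials, and SQL schema below are hypothetical, not from any real system), here is how a "friends of friends" query reads in Cypher via the official Neo4j Python driver, next to a comparable SQL shape:

```python
# A hedged sketch: the Person/FRIEND graph, credentials, and SQL schema below
# are hypothetical illustrations, not from a real system.
# Requires the official Neo4j Python driver: pip install neo4j
from neo4j import GraphDatabase

# Two-hop "friends of friends" in Cypher: one pattern, no joins.
CYPHER_QUERY = """
MATCH (p:Person {name: $name})-[:FRIEND*2]->(fof:Person)
RETURN DISTINCT fof.name
"""

# A comparable shape in SQL: every hop adds another pair of joins.
SQL_EQUIVALENT = """
SELECT DISTINCT p3.name
FROM person p1
JOIN friendship f1 ON f1.person_id = p1.id
JOIN person p2     ON p2.id = f1.friend_id
JOIN friendship f2 ON f2.person_id = p2.id
JOIN person p3     ON p3.id = f2.friend_id
WHERE p1.name = 'Alice'
"""

def friends_of_friends(uri, user, password, name):
    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        with driver.session() as session:
            result = session.run(CYPHER_QUERY, name=name)
            return [record["fof.name"] for record in result]

# Example: friends_of_friends("bolt://localhost:7687", "neo4j", "secret", "Alice")
```

Going three hops deep in Cypher just means changing `*2` to `*3`; in SQL it means yet another pair of joins, and the relational query planner pays for each one.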
Before we go any further we'll need at least a passing understanding of how NoSQL enables big data.
NoSQL - A Quick Overview
What's happened to the "relational" world? Data warehouses, data marts, and multi-dimensional cubes for OLAP, along with SAS, once ruled the BI and data analysis landscape. Yet open source projects like R, Hadoop, Cassandra, and Spark are now at the forefront of the "big" data world. Even SQL databases' OLTP dominance may be challenged by modern graph databases like Neo4j, with their ACID support, ease of use, and lightning speed for some use cases.
NoSQL - could anyone have picked a worse or more misleading name? Every major cloud provider offers SQL-based APIs to many of their "NoSQL" data sources. SQL tends to connote relational data stores and tables. Yet graph databases are intrinsically relational, and Google's distributed, non-relational data store (now categorized as column-based NoSQL) was called Bigtable, queried via MapReduce; its open source descendants include HBase and Hadoop. Oh well, at least column-based NoSQL is not really tabular.
NoSQL type characteristics overlap, suggesting innovation will continue. My advice is to think "not only SQL" when you think of NoSQL.
Here is Wikipedia's current taxonomy of the relevant NoSQL types, along with example implementations:
- Column: Cassandra, HBase
- This type is often part of a distributed data store, which has many benefits beyond the scope of this conversation. For now we'll just say it allows reliable, fast querying against huge amounts of data across many computers, or nodes, at once
- MapReduce often handles querying, where "map" sorts and filters and "reduce" summarizes. Other common query tools include Hive and Pig
- A column of a distributed data store is its lowest-level object. It is a tuple (a key-value pair) consisting of three elements: a unique name, a value (or set of values), and a time stamp used to determine whether the content is valid or stale (see the first sketch after this list).
- Example columns:
- street: {name: "street", value: "1234 x street", timestamp: 123456789},
- city: {name: "city", value: "san francisco", timestamp: 123456789}
- This data often resides on a Hadoop filesystem (HDFS) and may be performance-optimized via Spark
- Graph: Neo4j, Apache Giraph, MarkLogic
- Graph databases use nodes, which represent entities like a person; edges, which represent relationships; and properties, which can be associated with both nodes and edges
- Working with graph databases is generally intuitive: the storage model directly reflects natural language rather than being burdened with the physical implementation details of most other data stores, e.g. relational or column
- Relational databases use keys to relate entities to one another, which leads to "joining" one or many tables. Graph databases use pointers to relate one entity to another and can capture properties about the relationship itself. The deeper the levels of relationships, the more this database type differentiates itself from relational stores.
- Compared with relational databases, graph databases are often faster for associative (i.e. highly related) data sets and map more directly to the structure of object-oriented applications. They can scale more naturally to large data sets because they do not typically require expensive join operations. And because they depend less on a rigid schema, they are better suited to managing ad hoc and changing data with evolving schemas.
- Conversely, relational databases are typically faster at performing the same operation on large numbers of data elements.
- Multi-model: MarkLogic, OrientDB
- Supports multiple data models against a single, integrated backend. Document, graph, relational, and key-value models are examples of data models that may be supported by a multi-model database.
- Martin Fowler's article on Polyglot Persistence suggests this type will become more prevalent over time
- Document: MarkLogic, MongoDB
- A document-oriented database, or document store, is designed for storing, retrieving, and managing document-oriented information
- Graph databases are similar, but add another layer, the relationship, which allows them to link documents for rapid traversal.
- Key-Value: Redis, Oracle NoSQL
- Manages data in associative arrays, a data structure more commonly known today as a dictionary or hash.
- Dictionaries contain a collection of objects, or records, which in turn have many different fields within them, each containing data. These records are stored and retrieved using a key that uniquely identifies the record, and is used to quickly find the data within the database.
- Very easy for developers to use, e.g. to persist objects, and computationally powerful for some use cases. Some graph databases' underlying implementations are key-value stores (see the second sketch after this list).
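To make the column tuple above concrete, here is a minimal sketch in plain Python; the one-hour TTL and row layout are illustrative assumptions, not any particular store's format:

```python
import time

# Illustrative only: a column-family "row" as plain Python data, following the
# (name, value, timestamp) tuple described above. Real stores like Cassandra
# or HBase have their own storage formats; the one-hour TTL is a made-up value.
TTL_SECONDS = 3600

def make_column(name, value):
    return {"name": name, "value": value, "timestamp": time.time()}

def is_stale(column, ttl=TTL_SECONDS):
    # The timestamp lets a reader decide whether the content is valid or stale.
    return time.time() - column["timestamp"] > ttl

row = {  # a row key maps to many columns
    "street": make_column("street", "1234 x street"),
    "city": make_column("city", "san francisco"),
}
print(row["city"]["value"], "stale?", is_stale(row["city"]))
```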
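And here is a minimal key-value sketch using the redis-py client; the host, key name, and stored record are all hypothetical:

```python
# Key-value in practice, sketched with the redis-py client (pip install redis).
# The host, key name, and stored record are all hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

# Persist an object (record) under a key that uniquely identifies it...
record = {"name": "Ada", "city": "san francisco"}
r.set("user:1001", json.dumps(record))

# ...then retrieve it quickly by that same key.
fetched = json.loads(r.get("user:1001"))
print(fetched["city"])
```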
Dealing with the Big in Big Data
A simple example:
Let's take a simple comparison: analyzing invoices versus analyzing sensor data from the internet of things. Traditionally, for invoices you might extract the invoice data from your transactional data store, transform it into dimensions and facts, and load it into your data mart to run reports, or do more sophisticated analysis by loading it into a cube. For gigabytes of data this works well (though it requires a lot of money to set up; more on that in a minute).
Sensor data sets are often much, much larger: think terabytes. Yet we can simplify and say the data has many dimensions (e.g. time, sensor type) and sensor readings (facts) to assess. Using ETL to create a data mart is impractical at this scale. Enter "big data" with its distributed data stores and analytic capabilities.
Let's take a more accessible, if oversimplified, example than sensor data. Perhaps Macy's wants to analyze the last 50 years of sales invoices. Rather than put all of that data into a data mart, they might leverage the distributed computing power of Hadoop and MapReduce. They could create a text file for each invoice and distribute the invoices across many data stores in the cloud. Next they'd use MapReduce to run their summarizations across many computers in parallel, along the lines of the sketch below.
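Here is a conceptual, single-process stand-in for that flow; the invoice format is made up, and a real job would run the phases across many nodes (e.g. via Hadoop Streaming):

```python
from itertools import groupby
from operator import itemgetter

# A conceptual, single-process stand-in for the MapReduce flow above. The
# invoice lines are made up; a real job would run the map and reduce phases
# across many nodes (e.g. via Hadoop Streaming).
invoices = [
    "2015,store-042,19.50",
    "2015,store-042,5.25",
    "2016,store-007,12.50",
]

def map_phase(line):
    # "map" filters and reshapes: emit (year, amount) pairs.
    year, _store, amount = line.split(",")
    yield year, float(amount)

def reduce_phase(year, amounts):
    # "reduce" summarizes: total sales for one year.
    return year, sum(amounts)

# The shuffle/sort step groups map output by key before reducing.
pairs = sorted((kv for line in invoices for kv in map_phase(line)),
               key=itemgetter(0))
totals = [reduce_phase(year, (amount for _, amount in group))
          for year, group in groupby(pairs, key=itemgetter(0))]
print(totals)  # [('2015', 24.75), ('2016', 12.5)]
```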
It turns out this processing can be done very quickly. Programming is involved, but at nowhere near the cost of a more traditional approach. The aggregations this processing produces can then be fed into a tool like SAS (or R) for more in-depth, ad hoc analytics.
Getting more granular:
The term "text file" above could use some expansion. These could be realized as NoSQL columns, hive "tables" etc. Hive is a great example of why we might choose "not only SQL" as the expansion of the term NoSQL. It exposes a SQL API to interact with virtual tables.
As the complexity of the problem grows, so do the implementation details. For example, designing the structure of an HBase database is non-trivial.
In the simple example above we said the outputs of MapReduce can be input to a tool like SAS for ad hoc querying. Apache Spark enables Hive to present users with complex, OLAP-style ad hoc query capabilities against huge distributed data stores.
Spark creates a flexible abstraction called the "resilient distributed dataset" (RDD) that aggregates data across many computers in a cluster. This overcomes MapReduce's limitation of requiring a linear data flow, where you map a function across the data and then reduce the results onto disk: Spark creates shared memory across the computers in the cluster, enabling iterative passes over the data. Put more simply, it brings OLAP (and machine learning capabilities) to cloud tools including Hadoop's HDFS, Cassandra, Amazon S3, and OpenStack's Swift.
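A minimal PySpark sketch of the idea; the input path is hypothetical:

```python
# A minimal PySpark sketch (pip install pyspark) of the resilient distributed
# dataset idea. The input path is hypothetical. cache() keeps the data in
# cluster memory, so repeated passes avoid MapReduce-style trips to disk.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("invoice-totals").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/invoices/*.csv").cache()

# Pass 1: total sales per year, expressed as transformations on the RDD.
totals = (lines.map(lambda line: line.split(","))
               .map(lambda fields: (fields[0], float(fields[2])))
               .reduceByKey(lambda a, b: a + b)
               .collect())

# Pass 2 reuses the cached RDD without rereading from disk.
invoice_count = lines.count()

print(totals, invoice_count)
spark.stop()
```

The `cache()` call is what gives the second pass its speed: the data stays in cluster memory instead of being reread from disk.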
However, Spark also brings additional complexity, such as the use of cluster managers. It has its own native manager and also supports Apache Mesos and Hadoop YARN. Once again we see Google innovation at work: Mesos conceptually descends from Google's Omega scheduler, which Google has used to manage its services at scale. We'll stop here, as cluster managers are a topic unto themselves.
Finishing up
This post summarizes a huge space in a few pages, and I've tried to describe a very complicated space in a simple manner. I hope you found it useful. If I've misrepresented rather than simplified, please comment so I can update the post.
We started big, so let's end big. Facebook's network of friends has made it a pioneer in graph processing: it has scaled graph processing up to handle a trillion edges, i.e. relationships! Read about that here.