Tuesday, 16 January 2018

Where to Use Artificial Intelligence in Your Enterprise?

Artificial intelligence is a concept that is causing many ripples in the technology space. Growth in hardware, in analytical models and engines, and in the sheer availability of data are the chief drivers of this hype. In recent times, we have seen much ground-breaking news on AI, ranging from self-driving cars to technology majors acquiring AI startups, to defense, to healthcare.
Amid all this hype, the basic question for the business world is: how can AI help bring costs down and performance efficiencies up?
In my view, AI is still at a nascent stage of adoption by enterprises. Though enterprises have large pools of data, they are skeptical about how benefits can be realized given their scope and size.
I shall try to trace the use-cases from the application stage through to the maintenance of infrastructure.
Some examples are:
1. Fast-Coders in the Making AI via machine learning has shown its capability to understand human language (e.g. Siri). Siri not only responds to your queries but also understands the intent behind them. Envision a scenario in which you are using an SDK to write code. The moment you type // or /* … */ (documentation comments) and describe the intent/functionality/use-case of that code in plain English, a bot pulls the relevant code out of the code repository (SVN/Team Foundation Server) and helps you complete the code. Alternatively, it can refer to that code and help you finish a piece of logic faster!
Therefore, in this case, we have a coding-helper bot, trained on the code repository of the enterprise (for more maturity, code available on GitHub can be used), which can suggest code modules/functions that developers can use for faster coding.
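To make the idea concrete, here is a minimal sketch of the retrieval step such a bot might perform, using TF-IDF similarity between the developer's plain-English comment and stored doc comments. The snippets, the comment and the scikit-learn-based approach are my own illustration, not a description of any actual product.

```python
# Toy illustration of the retrieval idea behind a coding-helper bot:
# index snippets from a (hypothetical) repository by their doc comments
# and suggest the closest match for a comment the developer types.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets pulled from the enterprise code repository.
snippets = {
    "validate an email address with a regular expression": "def is_valid_email(s): ...",
    "retry a failed HTTP request with exponential backoff": "def retry_request(url): ...",
    "parse a CSV file into a list of dictionaries": "def read_csv_rows(path): ...",
}
comments, codes = list(snippets.keys()), list(snippets.values())

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(comments)

def suggest(comment):
    """Return the stored snippet whose doc comment is closest to `comment`."""
    scores = cosine_similarity(vectorizer.transform([comment]), doc_matrix)[0]
    return codes[scores.argmax()]

print(suggest("check that the user supplied a valid email address"))
```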
2. Automated Testing Automated testing has become an integrated part of many managed-services offerings and is a highly competitive field. Almost all major service providers have a presence here. For more information, you can refer to any of the analyst reports (Everest etc.).
3. Bots for Maintenance AI bots may soon replace human beings in mundane maintenance tasks like swapping server racks. There may be bots that monitor every part of your IT estate and predict network and storage failures, storage-limit threshold crossings, temperature deviations etc. Maintenance activities are the bread and butter of many IT companies, and many of them are currently working to utilize their expertise to build bots for predictive and preventive maintenance.
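As a toy illustration of the predictive side, the sketch below fits a straight line to recent disk-usage readings and estimates when a capacity threshold will be crossed; all the numbers, including the 90% threshold, are invented.

```python
# Minimal sketch: predict when disk usage will cross a threshold by
# fitting a linear trend to recent readings (all numbers are made up).
import numpy as np

days = np.arange(10)                                          # days 0..9
usage = np.array([61, 62, 64, 65, 67, 68, 70, 71, 73, 74])    # % disk used

slope, intercept = np.polyfit(days, usage, 1)   # straight-line fit
threshold = 90.0
days_to_threshold = (threshold - intercept) / slope

print(f"Usage grows ~{slope:.1f}% per day; the {threshold}% threshold "
      f"is reached around day {days_to_threshold:.0f}")
```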
4. IoT is here to Stay The concept of tools, devices, objects (everyday electronics) and infrastructure being connected to each other and working in tandem to create an ecosystem of smarter, more responsive devices brings with it unprecedented complexity. The challenge is going to be how to make sense of all the unstructured data so as to derive actionable intelligence. This is where enterprises will have to use AI algorithms, for classification and the like, to gain actionable insight.
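As a hypothetical illustration of such classification, the sketch below trains a Naive Bayes text classifier to sort unstructured device messages into categories; the messages, labels and categories are all invented.

```python
# Toy sketch: turn unstructured IoT event messages into actionable
# categories with a Naive Bayes text classifier (data is invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "temperature sensor reading above limit in rack 4",
    "disk write latency spiking on storage node",
    "temperature back to normal after fan replacement",
    "storage volume nearly full on backup node",
]
labels = ["thermal", "storage", "thermal", "storage"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Should classify this unseen message as a thermal issue.
print(model.predict(["fan failure caused rack temperature spike"]))
```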
5. Robust Cyber-security We all know about the two security breaches (among the largest on record) in the Yahoo network. A similar case was reported at Apple as well.
AI can be used to detect in-progress attacks, as it can learn the patterns across devices and networks and report any anomalies. Hence, mitigation action can be taken while the breach/intrusion is still in progress!
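A minimal sketch of the anomaly-detection idea, assuming we have telemetry of normal traffic to train on; the features and values are invented, and Isolation Forest is just one of several algorithms that could serve here.

```python
# Minimal sketch: flag anomalous network activity with an Isolation
# Forest trained on normal traffic (feature values are invented).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal traffic: [requests/min, payload size in KB] around typical values.
normal = rng.normal(loc=[100, 500], scale=[10, 50], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of requests with unusual payload size, as in an in-progress attack.
suspect = np.array([[950, 12000]])
print(detector.predict(suspect))   # -1 means anomaly, 1 means normal
```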
Artificial intelligence, though far from being accepted as an end-to-end solution, has been adopted in the form of various point solutions at the application and infrastructure levels. The time is not far off when enterprises will start taking definitive steps to integrate it within their overall strategic framework to achieve business goals.

Saturday, 13 January 2018

Analytics Driven Managed Services

In today’s world, the amount of data is rising rapidly. We see tremendous, continuous disruptions in data analytics, knowledge management, business intelligence and intelligent automation. As such, new trends like Big Data and analytics have become very pertinent to the CXO’s agenda of IT-landscape modernization and rationalization. As insights-driven road-maps and strategies take shape, they will become a very important source of competitive differentiation in the market. The current challenge for many enterprises is how to capitalize on big data analytics and derive maximum benefit given the technological disruptions and very tight budgets.
Breaking down Analytics Driven Decision Making
Analytics refers to the discovery and communication of relevant insights from data. Data-driven decision making emphasizes the quantitative aspects of data: number crunching and proper data processing to produce results based upon numbers as the underlying facts.
Analytics-driven decision making takes data-driven decision making a step further, into the domain of qualitative analysis. It allows for the integration of quantitative and qualitative data; another layer of insight is added, bringing one more layer of consideration and surfacing the various data-points that influence the decision-making process.
Consider an example: using analytics-driven decision making, CXOs can not only see which IT lever is causing the main problem, but also get insights on how to predict and prevent it before it occurs, so that proactive measures can be taken and the business sees no down-time. By concentrating on analytics-driven decision making, enterprises can focus on the important questions of What and Why. Decision-makers get an overall view of what is happening and why it is happening, along with how it can be prevented. This can have an immediate business impact in terms of accurate measurement of key metrics and cost efficiencies.
How to become Analytics Driven?
The key to launching analytics at a corporate level starts with utilizing the right tools and proper training on those tools. For enterprises that want to bring the power of analytics into their business domains, experts and tools customized to their needs are the starting point. For many enterprises, it is logical to assume that this is not their core competency, and hence they need external consulting to find ways to integrate business intelligence and revamp performance management, risk mitigation, compliance and governance mechanisms.
Therefore, the important decision for the CXOs is that, before pondering how to capitalize on analytics, they must have a strategic road-map covering governance models at the information, technology and project levels. This will ensure alignment of analytics with the core business objectives. It is also necessary for integrating new technologies into the existing IT landscape so that the enterprise derives maximum value.

Friday, 5 January 2018

Beginner’s Guide to Artificial Intelligence, Machine Learning, Neural Network and Deep Learning (Part 2/2)

This article is a continuation of my previous article. You can read that post here.
Artificial Neural Networks (ANN), or simply Neural Networks (NN), are another approach to teaching computers to think, decide and decipher the environment like humans. The approach is modeled on our understanding of the human brain (biology): interconnections among neurons. NN are typically visualized as a systematic interconnection of neurons, which exchange data or messages with each other. These connections carry weights (numbers) that are updated based upon experience, thereby making the NN adaptive to inputs and capable of understanding and learning.
Hence, this approach works on probability: based upon an input, it gives recommendations or predictions with a certain confidence level. A feedback mechanism enables learning: through feedback, the network learns whether its recommendations/predictions are correct or incorrect and consequently updates its approach for future events. For example, it can say with 80% confidence that an image is a cat’s image, 10% confidence that it is a leopard’s image, 6% confidence that it is a cheetah, and so on; the feedback mechanism of the network architecture then tells the NN whether it was correct.
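To make those confidence levels concrete, here is a tiny sketch of a softmax, the function commonly used to turn a network's raw output scores into probabilities; the scores themselves are invented.

```python
# Tiny sketch: turning raw network scores into the confidence levels
# described above, via a softmax (the scores are invented).
import numpy as np

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = np.exp(scores - np.max(scores))   # subtract max for stability
    return exps / exps.sum()

labels = ["cat", "leopard", "cheetah", "other"]
scores = np.array([3.0, 0.9, 0.4, 0.0])     # hypothetical final-layer outputs

for label, p in zip(labels, softmax(scores)):
    print(f"{label}: {p:.0%} confidence")    # roughly 80%, 10%, 6%, 4%
```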
Because of the heavy computation required to run even the most basic neural networks, they were long neither commercially feasible nor practical. The advent of GPUs in this field is promising, and we hope to see more results in the near future. The advantage of pursuing NN is that it retains the advantages machines have over humans, like speed, lack of bias and accuracy, while trying to mimic the human brain.
Once we have a basic understanding of NN, let us shift our focus to Deep Learning. Deep Learning refers to NN that are many layers deep; it is "deep" because of the structure and architecture of the ANNs. When NN were first conceptualized, they were just two layers deep, and as I mentioned earlier, it was computationally not feasible to build large networks. With GPUs, it is possible to build NN with 10+ layers.
Therefore, in deep learning, layers of neurons are stacked on top of each other. The job of the lowest layer is to take inputs in the form of text, images, sound etc. Each neuron then stores some information about the data elements it encounters. At the layer above, a more abstract version of the data is transmitted. Hence, the higher the layer, the more abstract the representation it learns.
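Here is a minimal sketch of that stacking idea: a forward pass through a few layers, where each layer transforms the output of the one below it. The weights are random stand-ins for what training would normally learn.

```python
# Minimal sketch of stacked layers: each layer transforms the output of
# the one below, so higher layers see progressively more abstract features.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [8, 16, 16, 4]      # input -> two hidden layers -> output

# Random weights stand in for what training would normally learn.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Pass the input through every layer with a ReLU non-linearity."""
    for w in weights:
        x = np.maximum(0, x @ w)   # ReLU keeps the network non-linear
    return x

print(forward(rng.normal(size=8)))   # four output activations
```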
A good use case for ANN is the extraction of features from images without any human intervention. Feed an ANN an image and it will compute features ranging from color distribution to something like whether a cat is running or sitting. The only requirement of such computation is the training of the ANN, which requires massive data.
With Big Data sources like Twitter, Facebook, etc., we have data corpora that were not available two or three decades back. Still, the challenge lies in cleaning and processing the data into the right format, which can then be fed to the machine learning algorithms.
I sincerely want to thank Michael Copeland and Bernard Marr (@bernardmarr) for shaping my thoughts on AI over the months.
{The examples used in the above blog are a bit far-fetched and ahead of the current time. They are provided to draw parallels from the real world for easy understanding.}

Tuesday, 2 January 2018

Beginner’s Guide to Artificial Intelligence, Machine Learning, Neural Network and Deep Learning (Part 1/2)

AI, ML, Neural Networks and Deep Learning are some of the buzzwords of today’s world. They are disrupting the way traditional businesses operate. Many service-based organizations are branding themselves as pioneers and leaders on these frontiers. However, before putting money into any of these AI-branded assets, it is very important to understand the business use-case of these technologies (on which I shall write later). For now, I shall focus on facilitating a beginner’s understanding of these buzzwords.
Artificial Intelligence, in the simplest language, is at work when machines can take decisions and perform actions (easy or complex) intelligently and smartly; that is, when they can mimic human activities like learning and solving problems.
AI may be classified into two categories: Applied AI and General AI. Applied AI is what is creating the buzz in today’s world: autonomous cars (Volvo S60 Drive Me), virtual agents (Louise, the virtual agent of eBay), playing strategic games (Go and Chess) against humans, etc. These are specific to a case in point. General AI is what we have seen in movies, like Ultron (from the Avengers series) and Ava (from Ex Machina), i.e. systems with the capability to mimic humans across the full range of actions humans can perform.
An interesting observation is that actions taken by machines that were once categorized as intelligent are no longer considered intelligent, e.g. Optical Character Recognition. Hence, just as humanity’s approach to measuring intelligence (in psychology) varies, the metrics for which actions count as artificially intelligent and which do not may require continuous revision.
Now, let us try to understand Machine Learning. ML, in its simplest form, is the ability of machines to parse data, categorize it, learn that categorization and then perform some action or give some prediction on cases for which they were not explicitly programmed. So, rather than traditional IF… ELSE statements, the machine is trained using large volumes of data and algorithms like clustering, decision trees, inductive logic and Bayesian networks, after which it can perform the task it was trained for.
I shall take a very simple example to explain it. Suppose the machine is trained on all the past matches of Roger Federer: his opponents, tournaments, practice sessions, performances at all levels, etc. Based upon this training, the machine should be able to estimate Federer’s odds of winning against any opponent.
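A toy sketch of this Federer example, using a decision tree (one of the algorithms mentioned above); the features and match records are entirely made up.

```python
# Toy sketch of the Federer example: train a decision tree on (made-up)
# past matches and predict the outcome against a new opponent.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [opponent rank, hard court? (1/0), past losses to opponent]
X = [
    [3, 1, 2],
    [50, 1, 0],
    [1, 0, 10],
    [20, 0, 1],
    [5, 1, 4],
]
y = [1, 1, 0, 1, 1]   # 1 = Federer won, 0 = Federer lost

model = DecisionTreeClassifier(random_state=0).fit(X, y)

new_match = [[8, 1, 3]]   # an opponent profile the model has not seen
print("Estimated win probability:", model.predict_proba(new_match)[0][1])
```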
Two more “learning” keywords used frequently are Supervised learning & Unsupervised learning.
Supervised learning happens when a bot is trained on a corpus of data for which the output is defined. If the outputs are defined as classes, it is a classification problem. If the output is continuous, it is a regression problem. There are many use-cases for classification, for example (a minimal sketch follows the list):
1. To classify whether a financial transaction is fraudulent or not
2. To classify the different types of objects in an image (fruits, vegetables)
3. To classify given texts into different categories (e.g. whether a tweet is about football, cricket etc.) in Natural Language Processing (NLP).
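Here is the promised minimal sketch, illustrating use-case 1 with logistic regression; all the transaction features and labels are invented.

```python
# Minimal sketch of use-case 1: classify transactions as fraudulent or
# not with logistic regression (all transaction data is invented).
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [amount in $, foreign transaction? (1/0), hour of day]
X = [
    [25, 0, 14], [4000, 1, 3], [60, 0, 19],
    [3500, 1, 2], [15, 0, 9],  [5000, 1, 4],
]
y = [0, 1, 0, 1, 0, 1]         # 1 = fraudulent, 0 = legitimate

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict([[4200, 1, 1]]))   # likely flagged as fraud -> [1]
```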
Unsupervised learning takes place when the bot starts to learn and make decisions by itself (a concept called self-learning). A minimal clustering sketch follows the example below.
Let us understand these two concepts from a real world example.
Case 1: Supervised Learning
Vipul is a kid. He sees different kinds of fruits. His father tells him that this particular fruit is an apple, that one an orange, etc. Now a new fruit comes in front of Vipul, one he has not seen before, and he identifies it as an apple, not as a mango, papaya etc.
Here, Vipul had a teacher to guide him and help him learn new concepts, so that when a new object came his way, one he had not been trained on, he was still able to categorize and identify it.
Case 2: Unsupervised Learning
Vipul is a kid. He went to North Korea, a country about which he had no prior knowledge: no information on its culture, food, traditions, language etc. However, Vipul tried to learn and make sense of his surroundings: what to eat, how to greet people, how to pray etc.
This is unsupervised learning because, in this case, though Vipul had plenty of data around him, he did not know how to derive meaning out of it, or rather what to do with it. He had no teacher to guide him and had to figure out a way on his own. Then, after some time, based upon what he learned, he started processing the data into information categories that made sense.
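Here is the promised minimal clustering sketch: K-Means, a common unsupervised algorithm, grouping unlabeled synthetic points into clusters with no teacher providing the categories.

```python
# Minimal sketch of unsupervised learning: K-Means groups unlabeled
# points into clusters on its own, with no teacher providing categories.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic, unlabeled data drawn from two unknown groups.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
data = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)    # two centers, near (0, 0) and (5, 5)
```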
(The rest of the knowledge will be shared in the second part.)