Wednesday, February 20, 2019

Kubernetes - Minikube start dashboard for a Web UI

For those developing solutions that run on Kubernetes, running a local cluster with Minikube can make your life much easier. One question that often comes up is how to make use of the Kubernetes Web UI in such a setup.

Running the Kubernetes Web UI while working with Minikube is straightforward; you can start the Web UI with a single command of the minikube CLI. The below command showcases how to start the Web UI and have your local browser open automatically at the correct URL.

Johans-MacBook-Pro:log jlouwers$ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:60438/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...

As you can see from the above example, the only command needed is 'minikube dashboard'.


The above screenshot shows you the Kubernetes Web UI in the browser as started by the minikube command.

Friday, February 15, 2019

Python Pandas – consume Oracle Rest API data

When working with Pandas, the most commonly known way to get data into a pandas DataFrame is to read a local csv file into the DataFrame using a read_csv() operation. In many cases the data encapsulated in the csv file originally came from a database. Getting from a database to a csv file on the machine where your Python code runs involves running a query, exporting the results to a csv file and transporting the csv file to a location where the Python code can read it and transform it into a pandas DataFrame.

Looking at modern systems, we see that more and more persistent data stores provide REST APIs to expose data. Oracle has ORDS (Oracle REST Data Services), which provides an easy way to build REST API endpoints as part of your Oracle Database.

Instead of extracting the data from the database, building a csv file and transporting the csv file so you are able to consume it, you can also instruct your Python code to interact directly with the ORDS REST endpoint and read the JSON response directly.

The below JSON structure is an example of a very simple ORDS endpoint response message. From this message we are, in this example, only interested in the items it returns, and we want those in our pandas DataFrame.

{
 "items": [{
  "empno": 7369,
  "ename": "SMITH",
  "job": "CLERK",
  "mgr": 7902,
  "hiredate": "1980-12-17T00:00:00Z",
  "sal": 800,
  "comm": null,
  "deptno": 20
 }, {
  "empno": 7499,
  "ename": "ALLEN",
  "job": "SALESMAN",
  "mgr": 7698,
  "hiredate": "1981-02-20T00:00:00Z",
  "sal": 1600,
  "comm": 300,
  "deptno": 30
 }, {
  "empno": 7521,
  "ename": "WARD",
  "job": "SALESMAN",
  "mgr": 7698,
  "hiredate": "1981-02-22T00:00:00Z",
  "sal": 1250,
  "comm": 500,
  "deptno": 30
 }, {
  "empno": 7566,
  "ename": "JONES",
  "job": "MANAGER",
  "mgr": 7839,
  "hiredate": "1981-04-02T00:00:00Z",
  "sal": 2975,
  "comm": null,
  "deptno": 20
 }, {
  "empno": 7654,
  "ename": "MARTIN",
  "job": "SALESMAN",
  "mgr": 7698,
  "hiredate": "1981-09-28T00:00:00Z",
  "sal": 1250,
  "comm": 1400,
  "deptno": 30
 }, {
  "empno": 7698,
  "ename": "BLAKE",
  "job": "MANAGER",
  "mgr": 7839,
  "hiredate": "1981-05-01T00:00:00Z",
  "sal": 2850,
  "comm": null,
  "deptno": 30
 }, {
  "empno": 7782,
  "ename": "CLARK",
  "job": "MANAGER",
  "mgr": 7839,
  "hiredate": "1981-06-09T00:00:00Z",
  "sal": 2450,
  "comm": null,
  "deptno": 10
 }],
 "hasMore": true,
 "limit": 7,
 "offset": 0,
 "count": 7,
 "links": [{
  "rel": "self",
  "href": "http://192.168.33.10:8080/ords/pandas_test/test/employees"
 }, {
  "rel": "describedby",
  "href": "http://192.168.33.10:8080/ords/pandas_test/metadata-catalog/test/item"
 }, {
  "rel": "first",
  "href": "http://192.168.33.10:8080/ords/pandas_test/test/employees"
 }, {
  "rel": "next",
  "href": "http://192.168.33.10:8080/ords/pandas_test/test/employees?offset=7"
 }]
}

The below code shows how to fetch the data with Python from the ORDS endpoint and normalize the JSON in a way that we will only have the information about items in our DataFrame.

import json
from urllib.request import urlopen
import pandas as pd

# fetch the data from the remote ORDS endpoint
apiResponse = urlopen("http://192.168.33.10:8080/ords/pandas_test/test/employees")
apiResponseFile = apiResponse.read().decode('utf-8', 'replace')

# load the JSON data we fetched from the ORDS endpoint into a dict
jsonData = json.loads(apiResponseFile)

# load the dict containing the JSON data into a DataFrame using json_normalize.
# do note we only use 'items'
df = pd.json_normalize(jsonData['items'])

# show the evidence we received the data from the ORDS endpoint
print(df.head())

Interacting with an ORDS endpoint to retrieve the data from the Oracle Database can in many cases be much more efficient than taking the more traditional csv route. Using a direct connection to the database and SQL statements will be the topic of another example post. You can also see the code used above in the machine learning examples project on Github.
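Do note that the response above has hasMore set to true and a next link: ORDS pages its results. The sketch below, with a hypothetical follow_pages helper, shows one way to walk those next links and collect all items before building the DataFrame; the fetch_json argument stands in for the urlopen plus json.loads call from the example above.

```python
def follow_pages(url, fetch_json):
    """Collect 'items' from an ORDS collection, following 'next' links.

    fetch_json is a callable that takes a URL and returns the parsed
    JSON response as a dict (e.g. a wrapper around urlopen + json.loads).
    """
    items = []
    while url is not None:
        page = fetch_json(url)
        items.extend(page.get('items', []))
        # look for a link with rel == 'next'; stop when there is none
        url = next((l['href'] for l in page.get('links', [])
                    if l.get('rel') == 'next'), None)
    return items

# a tiny in-memory stand-in for the real endpoint, two pages of data
fake_pages = {
    '/employees': {'items': [{'empno': 7369}, {'empno': 7499}],
                   'links': [{'rel': 'next', 'href': '/employees?offset=2'}]},
    '/employees?offset=2': {'items': [{'empno': 7521}], 'links': []},
}
all_items = follow_pages('/employees', fake_pages.get)
print(len(all_items))  # 3
```

The collected list can then be passed to json_normalize exactly as in the single-page example.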

Wednesday, February 13, 2019

resolved - cx_Oracle.DatabaseError: ORA-24454: client host name is not set

When developing Python code in combination with cx_Oracle on a Mac you might run into some issues, especially when configuring your Mac for the first time. One of the strange things I encountered was the ORA-24454 error when trying to connect to an Oracle database from my MacBook. ORA-24454 states that the client host name is not set.

When looking into the issue it turns out that the combination of the Oracle Instant Client and cx_Oracle will look into /etc/hosts on a Mac to find the client hostname and use it when initiating the connection from the Mac to the database.

Resolve the issue
A small disclaimer: this worked for me, and I expect it will work for other Mac users as well. First you have to find the actual hostname of your system; you can do so by executing one of the following commands:

Johans-MacBook-Pro:~ root# hostname 
Johans-MacBook-Pro.local

or you can run;

Johans-MacBook-Pro:~ root# python -c 'import socket; print(socket.gethostname());'
Johans-MacBook-Pro.local

Knowing the actual hostname of your machine you can now add it to /etc/hosts. This should make it look something like the one below:

127.0.0.1 localhost
127.0.0.1 Johans-MacBook-Pro.local

When set, this should ensure you no longer encounter the cx_Oracle.DatabaseError: ORA-24454: client host name is not set error when running your Python code.

Tuesday, February 12, 2019

Python pandas – merge dataframes

Pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. When working with data you can load data (from multiple types of sources) into a designated DataFrame which will hold the data for future actions. A DataFrame is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns).

In many cases the operations you want to do on data require data from more than one single data source. In those cases you have the option to merge (concatenate, join) multiple DataFrames into a single DataFrame for the operations you intend. In the below example, we merge two sets of data (DataFrames) from the World Bank into a single dataset (DataFrame) in one of the most basic merge manners.

Used datasets
For those interested in the datasets, the original data is coming from data.worldbank.org, for this specific example I have modified the way the .csv file is provided originally. You can get the modified .csv files from my machine learning examples project located at github.

Example code
The example we show is relatively simple and is shown in the diagram below: we load two datasets into their individual DataFrames using Pandas read_csv(). When both are loaded we merge the two DataFrames into a single (new) DataFrame using merge().


The below is an outline of the code example; you can get the full code example, including the used datasets, from my machine learning examples project at github.

import pandas as pd

df0 = pd.read_csv('../../data/dataset_4.csv', delimiter=";")
print('show the content of the first file via dataframe df0')
print(df0.head())

df1 = pd.read_csv('../../data/dataset_5.csv', delimiter=";")
print('show the content of the second file via dataframe df1')
print(df1.head())

df2 = pd.merge(df0, df1, on=['Country Code', 'Country Name'])
print('show the content of merged dataframes as a single dataframe')
print(df2.head())
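Since the World Bank csv files live in the example repository, a minimal self-contained sketch of the same merge, using two small made-up inline DataFrames instead of the csv files, looks like this (the column values are invented for illustration; only the key column names mirror the example above):

```python
import pandas as pd

# two small stand-in DataFrames sharing the same key columns as the csv files
df0 = pd.DataFrame({'Country Code': ['NLD', 'BEL'],
                    'Country Name': ['Netherlands', 'Belgium'],
                    'population': [17_000_000, 11_400_000]})
df1 = pd.DataFrame({'Country Code': ['NLD', 'BEL'],
                    'Country Name': ['Netherlands', 'Belgium'],
                    'gdp_growth': [2.9, 1.4]})

# an inner join (the default) on the two shared key columns
df2 = pd.merge(df0, df1, on=['Country Code', 'Country Name'])
print(df2)
```

Do note that merge() defaults to an inner join: rows whose keys appear in only one of the DataFrames are dropped, while passing how='outer' would keep them instead.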

Monday, February 11, 2019

Secure Software Development - the importance of dependency manifest files

When developing code, in this specific example Python code, one thing you want to make sure of is that you do not introduce vulnerabilities. Vulnerabilities can be introduced primarily in two ways: you create them or you include them. One way of adding an extra check that you do not include vulnerabilities in your application is handling dependency manifest files in the right way.

A dependency manifest file makes sure all the components your application relies upon are listed in a central place. One of the advantages is that you can use this file to scan for known security issues in the components you depend upon. It is very easy to do an import or include statement and add additional functionality to your code; however, whatever you include might have a known bug or vulnerability in that specific version.

Creating a dependency manifest file in python
When developing Python code you can leverage pip to create a dependency manifest file, commonly named requirements.txt. The below command shows how you can create a dependency manifest file:

pip freeze > requirements.txt

If we look into the content of this file we will notice a structure like the one shown below, which lists all the dependencies and their exact versions.

altgraph==0.10.2
bdist-mpkg==0.5.0
bonjour-py==0.3
macholib==1.5.1
matplotlib==1.3.1
modulegraph==0.10.4
numpy==1.16.1
pandas==0.24.1
py2app==0.7.3
pyobjc-core==2.5.1
pyobjc-framework-Accounts==2.5.1
pyobjc-framework-AddressBook==2.5.1
pyobjc-framework-AppleScriptKit==2.5.1
pyobjc-framework-AppleScriptObjC==2.5.1
pyobjc-framework-Automator==2.5.1
pyobjc-framework-CFNetwork==2.5.1
pyobjc-framework-Cocoa==2.5.1
pyobjc-framework-Collaboration==2.5.1
pyobjc-framework-CoreData==2.5.1
pyobjc-framework-CoreLocation==2.5.1
pyobjc-framework-CoreText==2.5.1
pyobjc-framework-DictionaryServices==2.5.1
pyobjc-framework-EventKit==2.5.1
pyobjc-framework-ExceptionHandling==2.5.1
pyobjc-framework-FSEvents==2.5.1
pyobjc-framework-InputMethodKit==2.5.1
pyobjc-framework-InstallerPlugins==2.5.1
pyobjc-framework-InstantMessage==2.5.1
pyobjc-framework-LatentSemanticMapping==2.5.1
pyobjc-framework-LaunchServices==2.5.1
pyobjc-framework-Message==2.5.1
pyobjc-framework-OpenDirectory==2.5.1
pyobjc-framework-PreferencePanes==2.5.1
pyobjc-framework-PubSub==2.5.1
pyobjc-framework-QTKit==2.5.1
pyobjc-framework-Quartz==2.5.1
pyobjc-framework-ScreenSaver==2.5.1
pyobjc-framework-ScriptingBridge==2.5.1
pyobjc-framework-SearchKit==2.5.1
pyobjc-framework-ServiceManagement==2.5.1
pyobjc-framework-Social==2.5.1
pyobjc-framework-SyncServices==2.5.1
pyobjc-framework-SystemConfiguration==2.5.1
pyobjc-framework-WebKit==2.5.1
pyOpenSSL==0.13.1
pyparsing==2.0.1
python-dateutil==2.8.0
pytz==2013.7
scipy==0.13.0b1
six==1.12.0
xattr==0.6.4
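Because every entry is pinned with ==, the manifest is trivial to process programmatically, for example to compare the pinned versions against an advisory list. A small sketch (the known_bad dict below is made up for illustration, not a real vulnerability database):

```python
def parse_requirements(text):
    """Turn 'name==version' lines into a dict of pinned versions."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '==' not in line:
            continue  # skip blanks, comments and unpinned entries
        name, version = line.split('==', 1)
        pins[name] = version
    return pins

# hypothetical advisory data, invented for this example
known_bad = {'pyOpenSSL': ['0.13', '0.13.1']}

manifest = """\
pyOpenSSL==0.13.1
pandas==0.24.1
"""
pins = parse_requirements(manifest)
flagged = [n for n, v in pins.items() if v in known_bad.get(n, [])]
print(flagged)  # ['pyOpenSSL']
```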

Check for known security issues
One of the simplest ways to check for known security issues is to check your code in at github.com. As part of the service provided by Github you will get alerts, based upon the dependency manifest file, for dependencies that might have a known security issue. The below screenshot shows the result of uploading a Python dependency manifest file to github.


As it turns out, somewhere in the chain of dependencies some project still has an old version of pyOpenSSL included which has a known security vulnerability. The beauty of this approach is that you have direct insight and can correct this right away.

Sunday, February 10, 2019

Python Matplotlib - showing or hiding a legend in a plot


When working with Matplotlib to visualize your data there are situations where you want to show the legend and cases where you want to hide it. Showing or hiding the legend is very simple, as long as you know how to do it; the below example showcases both showing and hiding the legend in your plot.

The code used in this example uses pandas and matplotlib to plot the data. The full example of this is part of my machine learning example repository on Github where you can find this specific code and more.

Plot with legend
The below image shows the plotted data with a legend. Having a legend is in some cases very useful; however, in other cases it can be quite disturbing to your image. Personally I think keeping a plot very clean (without a legend) is in many cases the best way of presenting a plot.
The code used for this is shown below. As you can see we use legend=True

df.plot(kind='line',x='ds',y='y',ax=ax, legend=True)


Plot without legend
The below image shows the plotted data without a legend.

The code used for this is shown below. As you can see we use legend=False
df.plot(kind='line',x='ds',y='y',ax=ax, legend=False)
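A minimal self-contained version of the above, for those who want to run it directly: the ds/y columns and their values are made up here, and the headless 'Agg' backend is used so the sketch runs without a display.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

# invented sample data with the same column names as the example above
df = pd.DataFrame({'ds': [1, 2, 3, 4], 'y': [10, 12, 9, 14]})

fig, (ax0, ax1) = plt.subplots(2, 1)
df.plot(kind='line', x='ds', y='y', ax=ax0, legend=True)   # with legend
df.plot(kind='line', x='ds', y='y', ax=ax1, legend=False)  # without legend

print(ax0.get_legend() is not None, ax1.get_legend() is None)  # True True
```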

Thursday, January 31, 2019

machine learning - matplotlib error in matplotlib.backends import _macosx


When trying to visualize and plot data in Python you might work with Matplotlib. In case you are working on MacOS and you use a venv, in some cases you might run into the below error message:

RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends. If you are using (Ana)Conda please install python.app and replace the use of 'python' with 'pythonw'. See 'Working with Matplotlib on OSX' in the Matplotlib FAQ for more information.

The reason for this error is that Matplotlib is not able to find the correct backend. The easiest quick-and-dirty way to resolve this is to add the following line to your code, before matplotlib.pyplot is imported for the first time:

matplotlib.use('TkAgg')

This should remove (in most cases) the error and your code should be able to run correctly. 
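The ordering matters: the backend has to be selected before the first import of matplotlib.pyplot. A minimal sketch of the pattern (shown here with the headless 'Agg' backend so it runs anywhere; on your Mac you would pass 'TkAgg' instead):

```python
import matplotlib
matplotlib.use('Agg')            # select the backend first ('TkAgg' on macOS)
import matplotlib.pyplot as plt  # only import pyplot after selecting it

print(matplotlib.get_backend())  # prints the active backend name
```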

Tuesday, January 29, 2019

Machine learning - Supervised machine learning and decision tree classifiers


When working with machine learning, and especially when you start learning machine learning, one of the first things you will encounter is supervised machine learning and writing decision tree based classifiers. A supervised classifier which leverages a decision tree to classify an object into a group bases itself on provided and already labeled data.

The data will (in most cases) consist of a number of features, all describing labeled objects in your training data. As an example, provided by Google, we could have a set of objects all having the label apple or orange. In our case we have mapped apple to the numeric value 0 and orange to the numeric value 1.

The features in our dataset are the weight of the object (it being an apple or an orange) and the type of skin. The skin of the object is either smooth (like that of an apple) or bumpy (like that of an orange). We have mapped bumpy to the value 0 and smooth to the value 1.

The below code showcases the implementation and also a prediction for a new object compared to the data we have in the learning data.

from sklearn import tree

features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]

clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)

print(clf.predict([[150, 0]]))

As you can see in the above example we predict what the object with features [150, 0] will be: an apple or an orange. The above example is just a couple of lines and is already a first example of a simple machine learning implementation in Python. The reason it only takes this limited number of lines is that we can leverage all the work already done by the developers of scikit-learn.

You can find the above code example and more examples on my Github project page.


[SOLVED] OSError: [Errno 2] "dot" not found in path.

Python Data Visualization
When trying to visualize data using pydot in Python you might run into an error stating that “dot” is not found in the path. This happens even after you have installed Pydot and imported it in your Python code. The main reason is that your Python code is unable to find the dot executable. The dot executable comes from the graphviz project, which means that even though you installed Pydot you are still missing a critical component.

If we look at the Pydot PyPI page we can already see a hint on this, as it tells us the following: Pydot is an interface to Graphviz and can parse and dump into the DOT language used by Graphviz. Pydot is written in pure Python.

To resolve this we can use yum to install Graphviz on Linux; in our case we use Oracle Linux.

yum -y install graphviz

This command will ensure that Graphviz is installed on your local Oracle Linux operating system. To check that the installation has completed as expected you can use the below command to check the version:

[vagrant@localhost vagrant]$ dot -V
dot - graphviz version 2.30.1 (20180223.0356)

Now, if you run your Python code and use something like pydot.graph_from_dot_data to work with dot data and visualize it at a later stage, you will no longer face the OSError: [Errno 2] "dot" not found in path error.
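If you prefer your code to fail with a clearer message, you can check for the dot executable up front using only the standard library before handing work to pydot; a small sketch (the require_dot helper is hypothetical, not part of pydot):

```python
import shutil

def require_dot():
    """Return the path of the Graphviz 'dot' binary, or raise a clear error."""
    path = shutil.which('dot')
    if path is None:
        raise RuntimeError(
            "Graphviz 'dot' executable not found in PATH; "
            "install the graphviz package (e.g. 'yum -y install graphviz')")
    return path

# shutil.which simply returns None when a binary is not on the PATH
print(shutil.which('no-such-binary-xyz'))  # None
```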

Friday, January 18, 2019

Oracle JET - data-bind as text or html

When using the KnockoutJS data-bind option while developing Oracle JET applications you have the option to use data-bind with a text option or an html option. In most cases developers will use the text option to bind a javascript view/model variable in the html view code. In most cases variables will contain a value that needs to be displayed in the context of the html code; however, in some cases the variable can also contain html code itself. When a variable contains html markup code you will have to handle this differently when defining the data-bind in the view.

The below example screenshot displays the same variable used in a data-bind, once with the text option and once with the html option. The first line uses the text option and due to this the html code for the underline is not rendered. In the second line we use the html option; the underline markup is rendered and the line is underlined in the view.


The below code example shows the Oracle JET knockoutJS html view code; the code can also be seen as a gist on github.


The below code example shows the Oracle JET knockoutJS viewModel code; the code can also be seen as a gist on github.

Thursday, January 17, 2019

Oracle JET - Knockout data-bind

When using Oracle JET for building your application you automatically make use of Knockout.js as this is a vital part of how Oracle JET works. Part of Knockout is the data-bind option. Knockout’s declarative binding system provides a concise and powerful way to link data to the UI. It’s generally easy and obvious to bind to simple data properties or to use a single binding. Understanding data-bind is one of the basic parts you need to grasp to be able to develop an application with Oracle JET.

You can bind variables defined in the viewModel (javascript) and make sure they become part of the view (HTML) code.  In the below diagram you can see the binding between the Oracle JET View and the Oracle JET View/Model


Knockout data-bind example
As you can see in the below example screenshot, we have displayed the value of someVariable0. The value of someVariable0 is set in the view/model.

The below example code showcases the view/model; within the IncidentsViewModel function you can see we assign the value to the variable which has been defined outside of the IncidentsViewModel function. You can view the example code on GitHub via this Gist link.



To ensure we bind the variable defined in the view/model .js file, we have to make sure we bind it in the html code of the view. The below example showcases how this binding is done as part of an HTML span using the data-bind option. You can view the example code on GitHub via this Gist link.



In effect, the above example showcases the most basic form of a knockout based Oracle JET binding between the view and the view/model.

Tuesday, January 15, 2019

UX as part of your Enterprise Architecture

Digitalization within enterprises is still growing rapidly; enterprises are adopting digitalization in every aspect of their daily processes and are moving to more intelligent and integrated systems. Even though a lot of work is being done in the backend systems, and a lot of systems are developed and modernized to work in the new digital era, a large part of the work has to do with UX (user experience).

A large number of enterprises are still lacking a good and unified user experience for internal users. It has long been thought that user experience was more applicable to external systems such as websites, webshops and mobile applications. It is, however, equally important to have a good and clear view on the internal user experience.

Internal user experience
Internal users, your employees, will use the systems developed on a daily basis. Ensuring the systems are simple to use, do what they promise and provide an intuitive experience will add to productivity. Additionally, ensuring that systems are easy to work with and provide a good experience will ensure that your employees are more motivated and adoption of new systems will be higher.

UX as an enterprise architecture component
In the past, it was common that every system within an enterprise would have a different experience. Menu structures, screen structures and the way a system behaved differed per application. As an employee normally interacts with multiple systems, this can become overwhelming and complex. Additionally, it is relatively common that internal enterprise user experiences are, to put it mildly, not that good. Most commonly, every system has a suboptimal interface and an interface design which is different from the rest.

An advised solution is to include standards for UX and interface design into the Enterprise Architecture repository and ensure, depending on your enterprise size, you have dedicated people to support developers and teams to include your enterprise UX blueprints within the internal applications.

When UX and interface design are part of the enterprise architecture standards, and you ensure all applications adhere to those standards, the application landscape will start to become uniform. The additional advantage is that you can have a dedicated group of people who build UX components such as stylesheets, icons, fonts, javascripts and other components to be easily adopted and included by application development teams. At the same time, if you have dependency management done correctly, a change to a central UX component will automatically be adopted by all applications.

Having a unified enterprise UX is, from a user experience and adoption point of view, one of the most important parts of ensuring your digital strategy will succeed.

Add UX consultants to your team
Not every developer is a UX consultant and not every UX consultant is a developer. Ensuring that your enterprise has a good UX team, or at least a good UX consultant to support development teams, can be a large advantage. As per Paul Boag, the eight biggest advantages of a UX consultant for your company are the following:
  1. UX Consultants Help Better Understand Customers
  2. UX Consultants Audit Websites
  3. UX Consultants Prototype and Test Better Experiences
  4. UX Consultants Will Establish Your Strategy
  5. UX Consultants Help Implement Change
  6. UX Consultants Educate and Inspire Colleagues
  7. UX Consultants Create Design Systems
  8. UX Consultants Will Help Incrementally Improve the Experience

Adopt a UX template
Building a UX strategy from scratch is complex and costly. A commonly seen approach is that enterprises adopt a template and strategy and use this as the foundation for their enterprise specific UX strategy.

As an example of enterprise UI and UX design, Oracle provides Alta UI, a true enterprise-grade user experience which you can adopt as part of your own enterprise UI and UX strategy. An example is shown below:

The benefit of adopting a UX strategy is that, when you select a mature implementation, a lot of the work is already done for you, and as an enterprise you can benefit from a well thought through design. Style guides and other components are ready to be adopted and will not require a lot of customization to be used within your enterprise, so you can ensure all your applications have the same design and the same user experience.


The presentation shown above, from Andrejus Baranovskis, showcases Oracle Alta UI Patterns for Enterprise Applications and Responsive UI Support.

Monday, January 07, 2019

The adoption of chatbots in the enterprise market

We have seen the rise of chatbots in the past couple of years; more and more customer facing websites implement a chatbot as part of the customer experience. Even though most people have had negative experiences with chatbots, the way they work is improving rapidly. Where chatbots used to be clumsy and not really good, this is rapidly changing. The AI models behind chatbots are improving rapidly and they become more and more "human". As the maturity of chatbots grows we see a growing adoption of chatbots by enterprises, for both customer facing as well as internal facing communication.




As part of a Forbes article on the digital transformation trends in 2019, chatbots have been placed second in the list.
  1. Chatbots Good to Great: "Hear me out on this one. I know we’ve all had extremely frustrating chatbot experiences as we round out 2018. But the good news is that huge steps continue to be made in the way of natural language processing and sentiment analytics—so many, in fact, that some believe NLP will shake up the entire service industry in ways we’ve never imagined. Think about all the services that could be provided without humans—fast food lines, loan processors, job recruiters! What’s more, NLP allows companies to gather insights and improve their service based on them. Some 40% of large businesses have or will adopt it by the end of 2019—which makes it one of our top 2019 digital transformation trends. Now, I know many are alarmed by where AI and Chatbots may impact the workforce, but I’m also bullish that companies are going to be upskilling their work forces rather than displacing them as machines may be good at delivering on clearcut requests but leave a lot to be desired when it comes to dealing with empathy and human emotion required to deliver great customer experiences."

Introducing a chatbot in the organisation
Enterprises in general implement chatbots for two main reasons: improving the efficiency of communicating with customers and improving internal processes. A commonly seen model is that enterprises take a two-phase approach to introducing chatbots to the business.

Phase 1 - Internal use
In phase 1 chatbots are implemented and used to optimize internal processes. For example, standard internal HR processes, internal requisitions and internal IT support are commonly seen as first adopters of an internal enterprise chatbot.

Phase 2 - External use
In phase 2 chatbots are used externally, as part of the enterprise website, shopping site or enterprise mobile applications.

In general phase 1 and phase 2 overlap; while the go-live of phase 1 is in effect, phase 2 is already being prepared for external use. By creating the correct overlap the momentum of the chatbot team is maintained and the lessons learned from phase 1 are included in phase 2. It is important, from both a team velocity and an adoption point of view, to keep the momentum and ensure an overlap or a minimal gap between phase 1 and phase 2.

It is not done in a day
Contradicting the popular belief that building and implementing a chatbot is an easy task, one will have to prepare for a "real project". Even though the use of a cloud platform and chatbot framework can speed up the technical implementation enormously, a healthy part of the work is in ensuring your chatbot has the correct vocabulary and that your conversation design is properly done.

Two aspects are important when developing your chatbot project planning. The first is to ensure enough room for conversation design and for building the right vocabulary. Conversation design covers how a conversation between your bot and a human will flow. Even though this might sound straightforward initially, it is a very good practice to ensure you have an experienced conversation design expert on your team.

The other important part is to include a maturity model for your chatbot in your project planning and strategy. The moment you want to launch internally and the moment you want to launch externally might be at different points in the maturity model. An example of a chatbot maturity model, developed by Leon Smiers at Capgemini, can be seen below.



Use a chatbot framework
Building a chatbot from the ground up, building all the AI and all the other parts needed to make a good chatbot, is an amazing project. However, such a project is only interesting from a technical understanding and research point of view, not so much from a business point of view. As a developer who just wants to build and include a chatbot interaction it is a better solution to leverage an existing platform. As an example, Oracle provides an intelligent chatbot platform.

The below developer conference video showcases how to build a chatbot.



You can find more information and developer code examples via this link to get started quickly with your first intelligent chatbot to include in your enterprise landscape. 

Wednesday, December 12, 2018

Oracle CX - act upon a negative customer experience

Customer experience is becoming more and more important, both in B2B as well as in B2C. When a customer has a bad experience with your brand and/or the service you provide, the chance that this customer will purchase something or will be a returning customer in the future becomes very small. Additionally, if the customer has a good customer experience but lacks an emotional bond with your brand or product, the chance that he will become a brand advocate is equally unlikely.

The challenge companies are facing is that it becomes more and more important to ensure the entire customer experience and the customer journey is perfect and at the same time an emotional binding is created. As a large part of the customer journey is being moved to the digital world a good digital customer journey is becoming vital for ensuring success for your brand.

As more and more companies invest heavily in the digital customer journey it is, by far, not enough to ensure you have a well working and attractive looking online shopping experience. To succeed one will have to go a step further than the competition and ensure that even a negative experience can be turned into a positive one.

The negative experience
The below diagram from koobr.com shows a customer journey that contains a negative experience: the customer wanted to purchase an item and found that the item was not in stock.

This poses two challenges: the first is how to ensure the customer will not purchase the required items somewhere else, and the second is how to turn a negative experience into a positive one.

Turning things positive
In the example from koobr.com a number of actions are taken in response to the out-of-stock issue.

  • The company sends an offer via its website
  • The company emails the customer when the item is back in stock
  • The company tweets when the item is back in stock

This all assumes that we know who the customer is, or that we can get the customer to reveal who he is. In case we do not know the customer, we can display a message stating that the customer can register for an alert when the item is back in stock, and that a discount will be given as soon as it is. The promise of a future discount on the item also helps to ensure the customer will not purchase it somewhere else.
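As a minimal sketch, the alert-plus-discount flow for an unknown visitor could look as follows. All class, method, and field names, as well as the discount percentage, are illustrative assumptions, not part of any specific product.

```python
# Hypothetical sketch of a back-in-stock alert flow for anonymous visitors:
# a visitor registers an email address for a product alert, and when the
# item is restocked every subscriber is notified with a discount promise.

class BackInStockAlerts:
    def __init__(self, discount_pct=10):
        self.discount_pct = discount_pct
        self.subscribers = {}  # product_id -> set of email addresses

    def register(self, product_id, email):
        """Called when a visitor asks to be alerted about a product."""
        self.subscribers.setdefault(product_id, set()).add(email)

    def on_restock(self, product_id):
        """Called when inventory reports the item is back in stock.

        Returns the notifications that would be handed to a mailer;
        subscribers are removed so they are only notified once."""
        notifications = []
        for email in sorted(self.subscribers.pop(product_id, set())):
            notifications.append({
                "to": email,
                "message": f"Item {product_id} is back in stock - "
                           f"use your {self.discount_pct}% discount code.",
            })
        return notifications

alerts = BackInStockAlerts(discount_pct=10)
alerts.register("sku-123", "visitor@example.com")
print(alerts.on_restock("sku-123"))
```

In a real shop the `on_restock` hook would be driven by the inventory system and the notification handed to a mail or messaging service; the sketch only shows the registration and one-time notification logic.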

Making a connection
The way you can contact the customer when the item is back in stock depends on whether we know who the customer is and which contact details we have registered. If we do know who the customer is, we can provide a discount specific to this customer, or offer another benefit.

The default way of connecting with a customer one on one is sending an email to the address we have registered for this customer. Many other channels are available, however, and depending on geographical location and demographic parameters a better option can be selected.

As an example:

  • A teenage girl might be more triggered if we send her a private message via Facebook Messenger.
  • A young adult male in Europe might be more triggered if we send a private message via WhatsApp.
  • A young adult female in Asia might be more triggered if we use WeChat.
  • A Canadian male might prefer to receive an email as the trigger informing him that the item is back in stock.
  • A senior citizen might be more attracted if a phone call is made to inform him that the item is back in stock.


Relying only on email and a generic tweet on Twitter will yield some conversion, but far less than can be achieved by taking more demographic parameters and multiple channels into account.
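The demographic examples above can be sketched as a simple channel-selection rule set. This is illustrative only; the age bands, attribute names, and channel labels are assumptions, and a real system would learn these preferences from captured interaction data rather than hard-code them.

```python
# Illustrative-only channel selection based on demographic attributes,
# mirroring the examples in the text. Not a production recommendation engine.

def preferred_channel(age, gender, region):
    """Pick a contact channel for a back-in-stock message (sketch)."""
    if age >= 65:
        return "phone"                      # senior citizen: phone call
    if age < 20 and gender == "female":
        return "facebook_messenger"         # teenage girl
    if 20 <= age < 35:
        if region == "europe" and gender == "male":
            return "whatsapp"               # young adult male in Europe
        if region == "asia" and gender == "female":
            return "wechat"                 # young adult female in Asia
    return "email"  # safe default, e.g. the Canadian male in the example

print(preferred_channel(70, "male", "north_america"))  # phone
print(preferred_channel(28, "male", "europe"))         # whatsapp
```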

Keep learning
One of the most important parts of a strategy as outlined above is that your company keeps learning and that every action, as well as the resulting reaction, is captured. In this case, no reaction is also an action. Combining constant monitoring of every action and reaction with a growing profile of each individual customer, and of the customer base as a whole, provides the dataset on which you can base the best action to counteract a negative experience and ensure a growing emotional bond between your customer and your brand.
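The capture step can be sketched in a few lines: every action is recorded together with its reaction, and a missing reaction is stored explicitly rather than omitted. The field names are illustrative assumptions, not a specific product's schema.

```python
# Minimal sketch of capturing every action and the resulting reaction,
# where the absence of a reaction is explicitly recorded as "no_reaction".
import datetime

events = []

def record(customer_id, action, reaction=None):
    events.append({
        "customer_id": customer_id,
        "action": action,
        "reaction": reaction or "no_reaction",
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

record("cust-42", "sent_back_in_stock_email", "opened_email")
record("cust-42", "offered_discount")  # customer did not respond

for e in events:
    print(e["action"], "->", e["reaction"])
```

In practice these records would land in a shared data store so that later analysis can correlate actions, reactions, and non-reactions per customer.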

Integrate everything
When building a strategy like this it needs to be supported by a technology stack. The biggest mistake a company can make is building the solution for this strategy in isolation and creating a new data silo. Customers are not interested in which department handles which part of the customer journey; from the outside, the company is one entity, not a collection of departments.

Ensuring that your marketing, sales, aftercare, and web-care departments, and even your logistical and financial departments, make use of a single set of data and add new information to this dataset is crucial.

To ensure this, the strategy needs to make use of an integrated solution. An example of such an integrated solution is the Oracle Cloud stack, where for example the Oracle Customer experience social cloud solution is fully integrated with Oracle marketing, services, sales and commerce.

Even though this might be the ideal situation and provides a very good solution for a greenfield implementation, many companies will not start in a greenfield; they will adopt a strategy like this in an already existing ecosystem of different applications and data stores.

This means that breaking down data silos within your existing ecosystem and ensuring that they provide a unified view of your customer and all actions directly and indirectly related to the customer experience is vital.

In conclusion
Creating a good customer experience for your customers and building an emotional relationship between customer and brand is vital. Nurturing this depends very much on the demographic parameters of each individual customer, and both a good customer experience and a growing relationship require having all data available and capturing every event.

Adopting a winning strategy involves more than selecting a tool; it requires identifying all available data and all data that can potentially be captured, and ensuring it is generally available so the best possible action can be selected.

Implementing a full end-to-end strategy will be a company-wide effort and will involve all business departments as well as the IT department.