Tuesday, March 21, 2017

Oracle Cloud - architecture blueprint - Central logging for microservices

When you develop an application landscape based on a microservices architecture, the question of logging will become apparent at some point. When you start developing with microservices you will see that there are differences with monolithic architectures that force you to rethink your logging strategy. Where a monolithic architecture has one central server, or a cluster of servers, on which the application runs, a microservices architecture has n nodes, containers, instances and services.

In a monolithic architecture most business flows run within a single server, so end-to-end logging is relatively simple to implement and later to correlate and analyze. If we look at the below diagram you will see that a call to the API gateway can result in calls to all available services as well as to the service registry. This also means that the end-to-end flow will be distributed over all the different services, and logging will for some parts be done on each individual node rather than in one central node (server) as is the case in a monolithic application architecture.

When deploying microservices in, for example, the Oracle Public Cloud Container Cloud Service, it is good practice to ensure that each individual Docker container, as well as each microservice, pushes its logging to a central API which collects the log data in a central location.

Implement central logging in the Container Cloud Service
The difference between the logging from the microservice and the Docker container deployed in the Oracle Public Cloud Container Cloud Service is that the microservice sends service-specific logging, developed as part of the service itself, to a central logging API. This can include technical logging as well as functional business flow logging which can be used for auditing.
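As a minimal sketch of what pushing such logging could look like, the snippet below builds a JSON log entry and POSTs it to a central logging API. The endpoint URL, field names, and the "technical"/"business" category split are assumptions for illustration, not part of any Oracle service.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical endpoint of the central logging microservice.
LOGGING_API_URL = "http://logging-service:8080/api/v1/logs"

def build_log_entry(service, level, message, category="technical"):
    """Build a JSON-serializable log entry; the field names are illustrative."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "category": category,  # "technical" or "business" to keep audit logging separable
        "message": message,
    }

def push_log(entry, url=LOGGING_API_URL):
    """POST one log entry to the central logging API."""
    data = json.dumps(entry).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status
```

A microservice would call something like `push_log(build_log_entry("order-service", "INFO", "order created"))` at the relevant points in its code.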

In some applications the technical logging is deliberately separated from the business logging. This ensures that business information is not available to technical teams and can only be accessed by business users who need to undertake an audit.

Technical logging at the container level is the lower-level logging generated by Docker and the daemon that provides the services needed to run the microservice.

The above diagram shows the implementation of an additional microservice for logging. This microservice provides a REST API capable of receiving JSON-based logging, so that all other microservices can push their logging to it.
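The receiving side of such a logging microservice can be sketched with nothing more than the Python standard library. The path, port, and the in-memory `store` sink below are assumptions; a real implementation would hand entries off to a file, queue, or database.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_STORE = []

def store(entry):
    """Placeholder sink; a real service would persist to disk, a queue, or a database."""
    LOG_STORE.append(entry)

class LogIngestHandler(BaseHTTPRequestHandler):
    """Minimal handler accepting JSON log entries on POST."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            entry = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            self.send_response(400)  # reject malformed JSON
            self.end_headers()
            return
        store(entry)
        self.send_response(202)  # accepted for asynchronous processing
        self.end_headers()

# To run the service:
#   HTTPServer(("0.0.0.0", 8080), LogIngestHandler).serve_forever()
```

Returning 202 rather than 200 signals to callers that the entry was accepted for later processing, which matches the asynchronous nature of central log consolidation.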

When developing the mechanism which pushes the log information, or audit information, to the logging microservice, it is good to ensure that this is a forked logging implementation. More information on forked logging, and how to implement it while preventing execution delay in high-speed environments, can be found in this blogpost where we illustrate this with a bash example.
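The same forked idea can be sketched in Python with a background worker thread and a queue, so the request path never blocks on the HTTP call to the logging service. The `send` callable is an assumption standing in for whatever actually performs the POST.

```python
import json
import queue
import threading

class AsyncLogShipper:
    """Ship log entries from a background thread so the caller never blocks."""

    def __init__(self, send):
        self._queue = queue.Queue()
        self._send = send  # callable performing the actual delivery, e.g. an HTTP POST
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def log(self, entry):
        """Enqueue the entry and return immediately."""
        self._queue.put(entry)

    def _drain(self):
        while True:
            entry = self._queue.get()
            try:
                self._send(json.dumps(entry))
            except Exception:
                pass  # in production: retry, or fall back to local disk
            finally:
                self._queue.task_done()
```

Usage would look like `shipper = AsyncLogShipper(send=post_to_logging_api)` followed by `shipper.log({...})` anywhere in the service code; the enqueue call costs microseconds regardless of network latency.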

Centralize logging with Oracle Management Cloud
Oracle provides, as part of its Public Cloud portfolio, the Oracle Management Cloud, which includes a Log Analytics service. When developing a strategy for centralized logging of your microservices you can have the receiving logging microservice push all logs to a central consolidation server in the Oracle Compute Cloud, and have the Oracle Management Cloud Log Analytics service collect them from there.

An example of this architecture is shown at a high level in the below diagram.

The benefit of the Oracle Management Cloud is that it provides an integrated solution which can be combined with other systems and services running in the Oracle Cloud, any other cloud, or your traditional datacenter.

An example of the interface provided by default by the Oracle Management Cloud is shown above. This framework can be used to collect and analyze logging for your Docker containers and your microservices, as well as for other services deployed as part of the overall IT footprint.

The downside for some architects and developers is that you have to comply with a number of standards and methods defined in the solution by Oracle. The upside is that a large set of analysis tooling and intelligence is pre-defined and available out of the box.

Centralize logging with the ELK stack
Another option to consolidate logging is making use of non-Oracle solutions. Splunk comes to mind, however, for this situation the ELK stack might be more appropriate. The ELK stack consists of Elasticsearch, Logstash and Kibana, complemented with Elastic Beats and the standard REST APIs.

The ELK stack provides a lot more flexibility to developers and administrators, but requires a deeper understanding of how to work with ELK. The below image shows a high-level representation of the ELK stack in combination with Beats.
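To make this concrete, a Logstash pipeline tying Beats and direct REST input to Elasticsearch could look roughly like the sketch below. Ports, hostnames, and the index naming pattern are assumptions to illustrate the shape of such a configuration.

```
input {
  beats {
    port => 5044          # Filebeat and other Beats ship here
  }
  http {
    port => 8080          # microservices can POST JSON log entries directly
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]   # hypothetical Elasticsearch host
    index => "microservices-logs-%{+YYYY.MM.dd}"
  }
}
```

The daily index pattern keeps retention manageable, as old indices can simply be dropped.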

As you can see in the above image there is a reservation for a {Future}beat. This is the place where you can deploy a Beat you developed yourself; you can also use this mechanism to make a direct REST API call to Logstash or directly to Elasticsearch. When developing logging for microservices it might be advisable to store the log data directly in Elasticsearch from within the code of the microservice. This might result in a deployment as shown below, where the ELK stack components, including Kibana for reporting and visualization, are deployed in the Oracle Compute Cloud Service.
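Storing log data directly in Elasticsearch from microservice code amounts to a POST against its document index REST API. The sketch below assumes a reachable Elasticsearch endpoint and a per-service daily index naming convention; both are illustrative choices, and the exact URL path can differ per Elasticsearch version.

```python
import json
import urllib.request
from datetime import datetime, timezone

ES_URL = "http://elasticsearch:9200"  # hypothetical Elasticsearch endpoint

def index_name(service, when=None):
    """Daily index per service, e.g. logs-order-service-2017.03.21."""
    when = when or datetime.now(timezone.utc)
    return "logs-%s-%s" % (service, when.strftime("%Y.%m.%d"))

def index_log(service, level, message):
    """Index one log document directly into Elasticsearch via its REST API."""
    doc = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
    }
    url = "%s/%s/_doc" % (ES_URL, index_name(service))
    req = urllib.request.Request(
        url,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.loads(resp.read())
```

In a high-throughput service this call should itself be wrapped in the forked logging mechanism described earlier, so indexing latency never delays the business flow.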

This will result in a solution where all log data is consolidated in Elasticsearch and you can use Kibana for analysis and visualization. You can see a screenshot from Kibana below.

The upside of using the ELK stack is that you have full freedom and possibly an easier path to direct integration. The downside is that you will need to do more yourself and need a deeper knowledge of your end-to-end technology (not necessarily a bad thing).

When you start developing an architecture for microservices you will need to take a fresh look at how you do logging. You will have to understand the needs of both your business and your DevOps teams. Logging should be implemented in a centralized fashion to ensure you have good insight into the end-to-end business flow as well as all technical components.

The platform you select for this will depend on a number of factors. Both solutions outlined in the above post show some of the benefits and some of the downsides. Selecting the right solution will require serious investigation; taking the time to make this decision well will pay back over time, and it should not be taken lightly.
