Where will we sprint?

There are a lot of blog posts about Spring and Spring Cloud out there. The Spring Blog, for example, describes all Spring technologies at a good level of abstraction. So why do we need another one? Well, many blogs deal with the various Spring technologies in isolation. In the Spring Cloud Sprint, however, we will look at a high-level showcase which brings together multiple Spring technologies: Cloud Config, Eureka, Sleuth/Zipkin, Hystrix, Feign, Ribbon and Data Rest. We will build a basic Spring Cloud infrastructure around two basic microservices, direct-flight-connection and direct-flight-cost, as well as a simple front-end which aggregates the information provided by the microservices. Wherever deeper insights into the technologies are necessary, I provide external links.

Before we start coding let’s have a look at technical prerequisites, the “big picture” and the technologies employed. I would like to thank Johannes Dilli and Christian Schaible for helping me to build up the showcase. Less “blah”? Jump directly to the coding part →

Note: You can find the complete source on GitHub under https://github.com/MichiBe/Spring-Cloud-Sprint


Technical Prerequisites

For this showcase, we will use Java 8, Maven and Docker. Docker is necessary to easily set up the two databases, MongoDB (community edition) and PostgreSQL. If you already have them installed, you won't need Docker. You can use any IDE, but I recommend the Eclipse-based distribution for Spring: Spring Tool Suite. It offers some really handy features when developing with Spring, especially with Spring Boot.


The “Big Picture”

Big Picture of the Showcase

We have two microservices: direct-flight-connection-service and direct-flight-cost-service. The direct-flight-connection-service manages the time schedule of direct flights from one place to another. This data is stored in a MongoDB database. The direct-flight-cost-service stores the cost of a direct flight in a PostgreSQL database. To connect the microservices with the databases we will use Spring Data Rest, which is part of the Spring Data Project. The other frameworks we are going to use are part of the Spring Cloud Project.

The front-end (see picture below) will aggregate the flight connections and the flight costs. Fallback handling based on Hystrix will ensure the operability of the front-end, even on back-end crashes. If the direct-flight-connection-service is unreachable or the request fails, an empty flight list will be returned. If the communication with the direct-flight-cost-service fails, costs from previous calls are cached, used as a fallback and displayed with a tilde (~). In case the fallback is used and no cached data is available, the cost will be displayed as “not available”.

Flight Aggregation Front-End

Libraries and Technologies

Spring Cloud

Spring Cloud is based on Spring Boot and provides a set of libraries to develop cloud-native applications. You can find a good overview of all the sub-projects that make up the overall Spring Cloud project on the official Spring Cloud website. Most of the technologies used for the showcase come from the Spring Cloud Netflix stack. For a deeper theoretical insight, check out the Spring Cloud Documentation. Based on this documentation I will offer a summary of the necessary technologies.

Spring Cloud Config

The Spring Cloud Config server enables centralized management of configurations for the applications connected to it. The applications retrieve their properties on startup from the Cloud Config server. The server itself gets its properties from several sources, e.g. the file system or a git repository. More about Cloud Config →   Jump to the implementation →

Spring Cloud Netflix

EUREKA

We will use Eureka for service discovery. Eureka is comparable to a good old phone book, where you can find the telephone numbers of the people you want to call. The instances of the two microservices direct-flight-connection and direct-flight-cost will be registered with a Eureka instance with their host, port and other metadata. If any other application, e.g. the front-end, needs to communicate with one of the microservices, it just has to ask Eureka for the location to establish the connection.

How does this work? The clients have to register themselves with Eureka. Afterwards they have to renew their status by sending a heartbeat at regular intervals. If the heartbeat of a Eureka client is missing, the Eureka server marks the instance in its registry as “DOWN”. On a normal shutdown, a client has to send a cancel request to get removed from the registry. To look up other services, the clients have to fetch the registry. It will be cached locally on every client and renewed periodically. For visual feedback on the service statuses, Eureka ships with a very simple but useful dashboard. More about Eureka →   More about Eureka in General →   Jump to the implementation →
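The register/renew/cancel lifecycle described above can be sketched in a few lines of plain Java. This is a hypothetical model of the lease mechanism, not Eureka's actual implementation: clients renew their lease via heartbeats, and an instance whose last heartbeat is older than the lease duration is reported as DOWN.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a heartbeat-based lease registry (not Eureka's real code).
class LeaseRegistrySketch {
    private final long leaseDurationMillis;
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    LeaseRegistrySketch(long leaseDurationMillis) {
        this.leaseDurationMillis = leaseDurationMillis;
    }

    // Registration and renewal both record the time of the latest heartbeat.
    void register(String instanceId, long now) { lastHeartbeat.put(instanceId, now); }
    void renew(String instanceId, long now)    { lastHeartbeat.put(instanceId, now); }

    // A normal shutdown sends a cancel request and removes the instance.
    void cancel(String instanceId) { lastHeartbeat.remove(instanceId); }

    // An instance whose lease has expired is marked "DOWN".
    String status(String instanceId, long now) {
        Long last = lastHeartbeat.get(instanceId);
        if (last == null) return "UNKNOWN";
        return (now - last) > leaseDurationMillis ? "DOWN" : "UP";
    }
}
```

The real Eureka server additionally protects itself against mass lease expiry (self-preservation mode), which this sketch leaves out.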

Overview Eureka Client Communication
HYSTRIX

In a distributed system, some instances may be unreachable. If there is no fault-tolerant way to handle the absence of an instance, this can cause cascading problems. In the worst case the request will be repeated every few seconds and the failure has to be transmitted through multiple instances to the front-end. The failed instance is given no time to recover, which usually exacerbates the situation. This is where Hystrix comes into play. Hystrix is an implementation of the circuit breaker pattern and observes the client calls. If a specific number of calls fail, Hystrix opens the circuit by enabling fallback handling. For example, another client may be asked for the necessary data, or simple default values will be returned. Only a few calls will reach the originally targeted client to check its availability. Therefore the failing client receives only a minimal amount of network load and can recover in the meantime. If a specific number of these “control requests” succeed, the circuit will be closed again and everything will work as before. More about Circuit Breaker Pattern →   Jump to the implementation →
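The behavior described above can be condensed into a toy circuit breaker. This is a minimal sketch, not the Hystrix API: after a number of consecutive failures the circuit opens and the fallback is returned; while open, only every n-th call is let through as a "control request", and one successful probe closes the circuit again.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch (hypothetical; Hystrix uses rolling windows
// and error percentages instead of simple counters).
class CircuitBreakerSketch<T> {
    private enum State { CLOSED, OPEN }

    private final int failureThreshold;  // consecutive failures before opening
    private final int probeInterval;     // while open, every n-th call is a probe
    private int consecutiveFailures = 0;
    private int callsWhileOpen = 0;
    private State state = State.CLOSED;

    CircuitBreakerSketch(int failureThreshold, int probeInterval) {
        this.failureThreshold = failureThreshold;
        this.probeInterval = probeInterval;
    }

    T execute(Supplier<T> call, Supplier<T> fallback) {
        if (state == State.OPEN && ++callsWhileOpen % probeInterval != 0) {
            return fallback.get();       // short-circuit: spare the failing client
        }
        try {
            T result = call.get();       // real call, or control request while open
            consecutiveFailures = 0;
            state = State.CLOSED;        // a successful probe closes the circuit
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                state = State.OPEN;
                callsWhileOpen = 0;
            }
            return fallback.get();
        }
    }

    boolean isOpen() { return state == State.OPEN; }
}
```

Hystrix adds rolling statistical windows, thread-pool isolation and timeouts on top of this basic idea.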

Overview Hystrix Behavior
RIBBON

We will use Ribbon as a client-side load balancer. The client-side solution differs from the server-side solution in that the clients are responsible for distributing requests on their own. The distribution strategy can vary: for example, the available back-end instances may be picked randomly, round robin, response-time based, or zone- and availability-based. The server locations of the potential instances may be hard-coded in a property file, but usually Ribbon will be integrated with Eureka to get the available instances automatically. In our case Ribbon will be used in an implicit manner, because Feign already uses Ribbon under the hood. More about Ribbon in General →   Jump to the implementation →
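The simplest of the strategies mentioned above, round robin, fits in a few lines. A hypothetical sketch (not Ribbon's actual API) of a client that cycles through the instance list it would normally fetch from Eureka:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin chooser; Ribbon's real ILoadBalancer also tracks
// instance health and refreshes the server list from Eureka.
class RoundRobinSketch {
    private final List<String> servers;                 // e.g. fetched from Eureka
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinSketch(List<String> servers) {
        this.servers = servers;
    }

    // Each call returns the next server in the list, wrapping around.
    String choose() {
        int index = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}
```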

FEIGN

As an alternative to the Spring RestTemplate you can use Feign as a REST client. Feign is more declarative and therefore a little bit easier to use than the RestTemplate. Feign already includes the client-side load balancing solution Ribbon. As we are already using Hystrix, Feign will recognize its availability on the classpath and will wrap all methods with the circuit breaker. More about Feign → Jump to the implementation →

Spring Cloud Sleuth

Sleuth is a solution to trace requests in a distributed system like a microservice environment. Roughly speaking, it enriches log statements with a trace id and a span id. The trace id is the same for one request throughout the whole distributed system. The span id represents a basic unit of work within the microservice, for example searching for the cost of a flight. To visualize this information you can use Zipkin. The applications using Sleuth can push their span ids via HTTP, Kafka or Scribe to the Zipkin server. More about Spring Cloud Sleuth →   Jump to the implementation →

Overview Sleuth Log Tracing

Spring Data

As mentioned earlier, not all libraries used in this blog post are based on the Spring Cloud Project. Some important modules used in the database layer (between the database and the two microservices) come from the Spring Data Project. This project aims to offer developers a convenient and consistent way to access and query different kinds of databases, like SQL databases, document-oriented databases or even map-reduce frameworks. In this blog post we are using three out of the more than 15 database-technology-specific modules listed here.

Spring Data MongoDB

Integration with the document oriented database MongoDB. See all provided features →  Read full documentation →

Spring Data JPA

Integration with databases accessed via JPA. See all provided features →   Read full documentation →

Spring Data Rest

This module builds on top of Spring Data repositories like Spring Data MongoDB or Spring Data JPA and enables a REST based hypermedia access to the underlying database. For collections it offers the methods GET and POST; for single entities the methods GET, PUT, PATCH and DELETE are supported. By default the presentation format of the data is JSON-HAL. It is also possible to enable pagination and sorting of resources. More about Spring Data Rest →


Coding

After this short theoretical excursion, let us start coding. Or, to be more precise, let's start with configuration: thanks to the ease of Spring, the coding effort for this showcase is minimal. An easy way to set up Spring Boot based projects is to use the Spring Initializr. You just have to choose a project type (we will choose Maven), the desired version of Spring Boot (we will use 1.5.4) and fill in the artifact coordinates. Afterwards you can choose the dependencies you need for your specific project. Based on this information, Initializr will generate a Maven project with all necessary Maven dependencies and provide it as a downloadable zip.

Spring Boot Initializr

Before we can start with the two microservices direct-flight-connection-service and direct-flight-cost-service we need to set up the infrastructure components: the config server, the service discovery and the log tracing.

Config-Server (Cloud Config)

Setup

Let’s start with the Cloud Config server. In Initializr we need to add the following dependencies:

Most importantly, we need to add the Config Server dependency, which brings all the functionality of the Spring Config Server. As mentioned in the first chapter, we will use Sleuth to enrich our logs with trace ids, so we have to include it as a dependency as well. The last dependency is the Actuator, which is necessary to expose JMX and HTTP endpoints for monitoring and application management. If you generate and download the project, it should result in a typical Maven-style folder layout. Additionally, Initializr creates a Maven wrapper so you don't have to install Maven explicitly.

In the application.properties or application.yml file inside the resources folder we have to add the following properties:

application.yml of Config Server
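In essence, the file contains something like the following sketch (port and folder name as used throughout this post; treat it as an approximation of the screenshot above):

```
server:
  port: 8888

spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/configs
```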

First we define the port for the config server. Then we determine that the config server should run in native mode, which means that it will load the config files from the classpath or the local file system instead of a git repository. This way you don't need to set up any remote git repository. By adding search locations to the native profile, as done with the last property, we tell the config server where to find the config files for the different applications.

In this case we will put them under the folder src/main/resources/configs. The property files have to be named after the application using them; we will add the specific files successively as we need them. For now we will just create one global property file (src/main/resources/configs/application.yml), whose properties are inherited by all the specific property files we will add later. Inside this master property file we define where Eureka and Zipkin can be found, and we tell Cloud Config that it should not override system properties set in the specific microservices.

global application.yml for the applications managed by the Config Server
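Based on the ports used later in this post (Eureka on 8761, Zipkin on 9411), the global file boils down to roughly this sketch (the property names follow standard Spring Cloud conventions; the exact screenshot contents may differ):

```
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/

spring:
  zipkin:
    baseUrl: http://localhost:9411/
  cloud:
    config:
      overrideSystemProperties: false
```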

Implementation

Initializr has already created the main class for the Spring Boot application. To make it a config server just add @EnableConfigServer in addition to the @SpringBootApplication annotation.
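With that annotation added, the whole class looks roughly like this sketch (the class name is whatever Initializr generated for your artifact; ConfigServerApplication is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
```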

Get it to run

That’s it. Our first component (the Config Server) is ready to start. Just run the main method and you should reach the configurations by using the application name and the profile name as path parameters: http://localhost:8888/anyapp/anyprofile. To get the global configurations you can use any non-existing application name and any profile name. If a file for the given application and the given profile exists, it will be merged with the global properties. More on this later.

 

Service-Discovery (Eureka)

Setup

Let's build a new project for service discovery using Initializr again. We will add the already known dependencies Actuator and Sleuth, and two additional ones: Config Client and Eureka Server. The Config Client is necessary to retrieve the configurations from the Config Server. The Eureka Server dependency includes the whole Eureka functionality.

Since we will manage the specific service configurations inside the Config Server, the applications themselves only need to offer the name of the application (necessary for Eureka and Config Server) and the URI to the Config Server inside their own property files.

application.yml of the Service-Discovery
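As described above, the local file only needs the application name and the Config Server URI; a sketch of what it contains:

```
spring:
  application:
    name: service-discovery
  cloud:
    config:
      uri: http://localhost:8888
```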

The main config file for service-discovery has to be created on the Config Server side, with a name equal to the property value of spring.application.name (src/main/resources/configs/service-discovery.yml). There we need to define the port and the hostname, and state that this server doesn't have to register with or fetch anything from Eureka, because it is the Eureka server itself.

service-discovery.yml located in the Config Server
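Mapped to the standard Eureka property names, the file looks roughly like this sketch:

```
server:
  port: 8761

eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
```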

Implementation

To enable Eureka just add the annotation @EnableEurekaServer to the main class. This will integrate the Eureka functionality into the underlying Tomcat server.

Get it to run

Now just restart the Config Server and afterwards the Eureka Server by running the main classes. The boot order is essential because Eureka needs to retrieve its configurations from the already running Config Server. To access the dashboard of Eureka go to http://localhost:8761/. To see how the global properties are merged with the specific properties of service-discovery go to http://localhost:8888/service-discovery/anyprofile.

 

Log-Tracing (Zipkin)

Setup

The last infrastructure component we need is the Zipkin Server where the microservices can push their traces to. As usual we will build the application using Initializr. The Zipkin Stream dependency includes the functionality to make the application a Zipkin Server or a Zipkin Stream Server. For visualization of the captured data we need the Zipkin UI dependency.

In the local application.yml add the name of the application and the location of the Config Server, as already described for Service Discovery. On the Config Server side, add the YAML file src/main/resources/configs/log-tracing.yml and configure the port and the Zipkin storage type. We use mem for in-memory storage so we don't need any database, but be aware that the traces will be lost after server shutdown. Additionally we will disable the sleuth.stream functionality, so the traces will be transmitted via REST and no message queue is necessary.

log-tracing.yml located in the Config Server
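Combining the three settings just described (port, in-memory storage, disabled stream), a sketch of the file might be:

```
server:
  port: 9411

zipkin:
  storage:
    type: mem

spring:
  sleuth:
    stream:
      enabled: false
```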

Implementation

To make the Spring Boot application a Zipkin server you only have to add the @EnableZipkinServer annotation to the @SpringBootApplication.

Get it to run

Restart the Config Server again and start the Log Tracing (Zipkin) server. It should be reachable under http://localhost:9411/. We will come back to the Zipkin front-end in the last chapter when we trace a request throughout our two microservices.

 

Direct-Flight-Connection-Service

Setup

As mentioned in the introduction, the direct-flight-connection-service will work with MongoDB. If you have MongoDB already installed, just use it. If not, you can download the image and run the MongoDB Docker container via the command: docker run -d -p 27017:27017 mongo.

The microservice itself will just store flight connections and offer them via a REST API. It will use the previously mentioned dependencies for Actuator, Sleuth and Config Client. Additionally, it needs Eureka Discovery to register itself with and fetch other services from Eureka, the Zipkin Client to push traces to the Zipkin server, MongoDB for the persistence layer and, last but not least, the Rest Repositories dependency to enable a REST-based interface for the MongoDB repository.

We need one more dependency not included so far, the Jackson JSR-310 module (jackson-datatype-jsr310), to serialize and deserialize the new Java 8 date/time classes like Instant.


Prepare the local application.yml as we did for the other applications. On the config server side we create src/main/resources/configs/direct-flight-connection-service.yml as follows:

direct-flight-connection-service.yml located in the Config Server

Implementation


The business model of the direct-flight-connection-service consists of one simple class, DirectFlightConnection, which you can see below. To auto-generate the id and the instantOfCreation field, we annotate these fields with @Id and @CreatedDate. Both annotations are part of Spring Data, which is bundled with the Spring MongoDB dependency we declared in the setup. The @CreatedDate annotation won't work on its own, though: it needs the @EnableMongoAuditing annotation on a Spring configuration class. As we don't use any extra configuration classes for simplicity, we just annotate the main class with this annotation.

DirectFlightConnection.java

To access the underlying database, let's create a DirectFlightConnectionRepository interface which extends MongoRepository, which itself is a PagingAndSortingRepository and therefore offers standard CRUD operations plus pagination for the entities. Not enough magic for you? No problem, here is some more. We want to make this repository a REST repository, reachable from the outside in a RESTful way, serving JSON-HAL with hypermedia support. This sounds like a heavy task, doesn't it? Not with the Spring REST Repositories project. Only one additional annotation, @RepositoryRestResource(collectionResourceRel = "directFlightConnections", path = "direct-flight-connections"), is necessary. The path defines the resource path where this entity will be offered. The collectionResourceRel defines how the collection of entities will be named in JSON-HAL. That's it! From now on the stored DirectFlightConnections will be accessible via REST.

DirectFlightConnectionRepository.java
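Given the annotation quoted above, the whole repository is essentially a one-liner; a sketch (the id type String is an assumption about the entity):

```java
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(collectionResourceRel = "directFlightConnections",
                        path = "direct-flight-connections")
public interface DirectFlightConnectionRepository
        extends MongoRepository<DirectFlightConnection, String> {
}
```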

For testing purposes we need some sample data, which will be generated on startup via the CommandLineRunner bean inside the main application class shown below. Another necessary bean is the defaultSampler bean, which defines the strategy used by Sleuth to export traces. If no Sampler bean is defined, nothing will be exported and you would only see the traces in the log. Because we are using Zipkin to store the traces, we are going to use the AlwaysSampler, which exports every trace.

One annotation used on this class but not yet explained is the @EnableEurekaClient annotation. It enables the integration with Eureka: registering, deregistering, fetching of the registry and so on. Of course it requires the client to know the Eureka host location, which we have already defined inside the global property file of the Config Server.

DirectFlightConnectionServiceApplication.java

Get it to run

Start the service and verify its registration in Eureka by opening the dashboard and checking the entry under currently registered instances. The direct-flight-connection-service should be displayed as shown below.

Furthermore the direct-flight-connection-service REST entry point should be accessible under http://localhost:9080/. Thanks to the REST Repository a nice link based navigation will lead you throughout the application and its available resources. All direct flight connections should be accessible under http://localhost:9080/direct-flight-connections and should look like:

All Direct Flight Connections as JSON

To request a single connection use the HAL links given in the JSON above. If your web browser wants to download the file instead of displaying it, you can install an add-on, use a different browser such as Chrome, or install Postman, which offers an easy way to build API requests.

 

Direct-Flight-Cost-Service

Setup

The direct-flight-cost-service will use a PostgreSQL database. If you already have it installed, just use it. If not, the easiest way to get and run the database is once again via Docker, with the following command: docker run -p 5432:5432 -e POSTGRES_PASSWORD=rootpasswd -d postgres.

For the direct-flight-cost-service we need nearly the same dependencies as in the direct-flight-connection-service, but instead of MongoDB we need the PostgreSQL dependency in combination with JPA. Don't forget to manually add jackson-datatype-jsr310 as well.

The local application.yml should look like the following snippet.

application.yml located in the Direct-Flight-Cost-Service

On the Config Server side, add a new YAML file src/main/resources/configs/direct-flight-cost-service.yml as listed below. We have to register a JDBC datasource. With create-drop we tell Spring Boot to create the database schema on startup and drop it on shutdown, which is sufficient for our demo purposes.

direct-flight-cost-service.yml located in the Config-Server

Implementation


The business model is straightforward. It consists of one simple entity class named DirectFlightCost, decorated with typical JPA annotations. The only Spring-specific part here is the registration of the AuditingEntityListener to enable auto-generation of the date via the annotation @CreatedDate.

DirectFlightCost.java

Equivalent to the MongoRepository we used in the direct-flight-connection-service, this time we need to create a JpaRepository to access the underlying PostgreSQL database. Requesting all DirectFlightCosts or a single DirectFlightCost won't be enough, so we need two additional custom query methods to search the REST repository via search parameters. The first one enables searching for a specific airline, the other one for an airline, departureAirport and arrivalAirport. The URL for these searches will be built from the method name and will look like:
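For example (with the service running on port 8085 as configured for this showcase; the parameter values are illustrative):

```
http://localhost:8085/direct-flight-costs/search/findByAirline?airline=Lufthansa
http://localhost:8085/direct-flight-costs/search/findByAirlineAndDepartureAirportAndArrivalAirport?airline=Lufthansa&departureAirport=Munich&arrivalAirport=Berlin
```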

It is important that all specified parameters are provided in the request URL, otherwise the framework will not find any resource and answer with status code 404.

DirectFlightCostRepository.java

Similar to the DirectFlightConnectionServiceApplication class, the DirectFlightCostServiceApplication class defines a defaultSampler and a CommandLineRunner bean. Adding entries with the same airline, departureAirport and arrivalAirport should fail due to the unique constraint inside the entity class. This ensures that the repository search for airline, departure and arrival airport will return a single DirectFlightCost.

DirectFlightCostServiceApplication.java

Get it to run

After starting the service it should be displayed in Eureka and its API should be accessible under http://localhost:8085/. You can also test one of the custom resource queries we defined above (e.g. http://localhost:8085/direct-flight-costs/search/findByAirline?airline=Lufthansa).

 

Flight-Aggregation-Frontend

So far we have two independent services. Now we need a component using them. Usually there would be another application, like a front-end, consuming the services via some kind of gateway service such as Zuul or Spring Cloud Gateway that routes the requests to the different back-ends. For convenience we will have the front-end server consume the services directly, with the help of the declarative REST client Feign, combined with Hystrix as a circuit breaker and Ribbon as a client-side load balancer. The name Flight-Aggregation-Frontend is a little bit misleading because (for reasons of simplicity) we will make this application display the Hystrix dashboard as well.

Setup

Go to Initializr again and select the dependencies shown below. You should already be familiar with most of them. New dependencies are Hystrix, Thymeleaf (a template engine to render HTML files), Feign and HATEOAS (which enables working with hypermedia formats and their implementations like JSON-HAL, as used in the back-end services).

The configurations inside the yml files, which you can see below, just contain the minimal necessary parameters to connect to the config server and to define a port.

application.yml for Flight-Aggregation-Frontend
flight-aggregation-frontend.yml located in the Config-Server

Implementation


We will start with the implementation of the model classes. Because we are using Feign as a REST client, it is necessary to have classes representing the JSON responses of the back-end services. The easiest way is to use the model of the back-end services directly, or to copy the classes and remove the database-specific syntax to get plain model classes. This approach may result in duplicated code, but it decouples the front-end from the back-end and we can slightly customize the model if necessary, as we do now. First of all we make the cost field a string, so we can use the tilde prefix (~) to mark an approximation in case of fallback. Secondly, we change all Instants to Dates, which avoids problems with the JSON deserialization of the Java 8 Instant. Next to the classes representing the back-end responses (DirectFlightConnection, DirectFlightCost), we need one more class (UiAggregatedFlight) holding the aggregated information displayed in the front-end.

UiAggregatedFlight.java

Now let's create the Feign REST clients to consume the back-end services. We don't need to create a controller class using the RestTemplate or any other REST client to consume the back-ends; this will all be handled for us by the Feign client. What we need is a simple interface (DirectFlightConnectionServiceClient) annotated with @FeignClient(...), declaring body-less methods annotated with @RequestMapping just like in a typical Spring REST controller. The @FeignClient annotation needs the name of the back-end service (which should be the same name used in Eureka) and, in our case, a fallback class which implements the interface and returns reasonable alternative data. We simply deliver an empty list if the direct-flight-connection-service is not reachable.

DirectFlightConnectionServiceClient.java & DirectFlightConnectionServiceClientFallback.java
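A sketch of what the client interface might look like (the method name, mapping and return type are illustrative, not necessarily the showcase's exact code; packages as in Spring Cloud Netflix 1.x, matching Boot 1.5.4):

```java
import java.util.List;

import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// The name matches the service name registered in Eureka; Ribbon resolves it
// to a concrete host and port, Hystrix wraps the call and routes failures to
// the fallback implementation.
@FeignClient(name = "direct-flight-connection-service",
             fallback = DirectFlightConnectionServiceClientFallback.class)
public interface DirectFlightConnectionServiceClient {

    // Body-less method: Feign generates the HTTP call from the mapping.
    @RequestMapping(method = RequestMethod.GET, value = "/direct-flight-connections")
    List<DirectFlightConnection> getDirectFlightConnections();
}
```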

The DirectFlightCostServiceClient is straightforward. We need one method declaration to find the costs via our custom query findByAirlineAndDepartureAirportAndArrivalAirport. The fallback is slightly more complex. It should be able to return the last costs requested, and if no costs are available it should return “not available”. To remember the costs of flight connections we declare a map in the fallback class, which will be filled with the cost per connection returned by the direct-flight-cost-service.

DirectFlightCostServiceClient.java
DirectFlightCostServiceClientFallback.java
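The caching behavior described above boils down to a map of last known costs. This sketch uses illustrative names (not the showcase's actual classes) and plain Java so the logic stands on its own: successful responses are remembered per connection, and the fallback returns the cached cost with a tilde prefix, or "not available" if the connection has never been seen.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a caching fallback for the cost lookup (hypothetical names).
class CostFallbackCacheSketch {
    private final Map<String, String> lastKnownCosts = new ConcurrentHashMap<>();

    // Called after every successful cost lookup.
    void remember(String connectionKey, String cost) {
        lastKnownCosts.put(connectionKey, cost);
    }

    // Called by the Hystrix fallback when the cost service is unreachable:
    // "~" marks the value as a possibly stale approximation.
    String fallbackCost(String connectionKey) {
        String cached = lastKnownCosts.get(connectionKey);
        return cached != null ? "~" + cached : "not available";
    }
}
```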

As mentioned before, we are using Thymeleaf as the template engine. The templates are simply HTML files enriched with some Thymeleaf markup. The template you need is listed below. Just add it to the default location under resources/templates. The style sheet and the necessary scripts are referenced via CDN. If you want the background image for the jumbotron, you should place the image under resources/static/images.

aggregated-flights.html

To serve the parsed template file we need to create a view controller (AggregatedFlightsViewController.java). We inject the service clients and use them to get all necessary data for the user interface. First we request the connections, and for every connection we request the respective cost. Not very performant, and therefore not a production-ready solution, but sufficient for this showcase. Every time we get an answer from the cost client we store the result in its fallback implementation, so in case of failure we can offer the last known cost.

AggregatedFlightsViewController.java

The application class needs a few annotations to enable Eureka integration, Feign, Hypermedia support, Hystrix and its dashboard.

FlightAggregationFrontendApplication.java

Get it to run

Restart the config server, start the front-end server and go to localhost:8080. You should see an HTML page as shown in the first chapter. If you have followed the instructions precisely until now, you may wonder why one row is displayed with unavailable cost. If we have a look into the log, we will see one Feign exception with status 404, because we are asking for the costs of a connection (airline: Lufthansa, departureAirport: Munich, arrivalAirport: New York) we haven't stored in the direct-flight-cost-service. So what you see is the fallback behavior of Feign in combination with Hystrix.


Wrapping Up

Let’s go through the infrastructure we just built and see what the key benefits of the used technologies are. Before we begin, make sure all applications are started.

“Monitoring” with the Help of Actuator

We added the Spring Boot Actuator dependency to every component, so let's see what information it provides. Besides the possibility to create custom endpoints, there are a few production-ready endpoints offered by default. They are listed in the reference guide. Most of them are secured by default, so we can't just use them. We have to disable security or make the endpoints insensitive, so they won't require authentication anymore. To make them all insensitive, add the property endpoints.sensitive=false, e.g. to the YAML of the Config Server, and restart it. Now we can investigate the endpoints at random. For example, open the URL http://localhost:8888/metrics to get general metrics like used memory, heap etc., or go to http://localhost:8888/trace to get information about the last 100 HTTP requests.

The Actuator endpoints are a good starting point for a centralized application monitoring. If you are interested in monitoring, you can have a look at the Spring Boot Admin project, which is an AngularJs front-end for monitoring based on the Actuator endpoints.

Fallback Behaviour

First of all we will analyze how the app reacts if one of the back-ends fails. To get a better insight into what Hystrix does, open its dashboard under http://localhost:8080/hystrix.

Enter the URL http://localhost:8080/hystrix.stream in the text field and press the Monitor Stream button, or navigate directly to http://localhost:8080/hystrix/monitor?stream=%20http%3A%2F%2Flocalhost%3A8080%2Fhystrix.stream, and you should see a UI similar to the following screenshot. I captured it while requesting the front-end when both back-ends were available. You can see one request for the flight connections and, for each connection, one request to the flight cost service (3 successful, 1 failing due to the incomplete cost database).

Hystrix Dashboard

Now stop the direct-flight-cost-service and reload the front-end. You should see that the frontend is using the old cost information but prefixed with a tilde (~). If costs have never been available their status will stay “not available”.

Aggregated Flight Frontend with fallback for direct-flight-cost-service

Reload the flight-aggregation-frontend multiple times and observe the Hystrix dashboard. After some reloads the cost requests (displayed on the right side of the screenshot) will open the circuit and the short-circuit count will rise (blue number). Only a few requests will reach the back-end and check for availability (red color). If you restart the cost service and continue reloading the front-end, the circuit will close again (this may take some time) and all requests will be processed normally. As a consequence, the true costs will be displayed again.

Trace a Request

Let’s analyze the traces of the past requests. For this purpose go to the Zipkin UI http://localhost:9411. Select one of the three applications (e.g. direct-flight-connection-service) and filter for specific traces. For a better overview I restarted the Zipkin server and reloaded the flight-aggregation-frontend once, so I got only one trace as you can see in the screenshot below.

Zipkin Trace View

As you learned in the theoretical part about Sleuth, a trace consists of multiple spans which represent single units of work. You can see these spans and their durations if you select the trace. By clicking on a single span you get an even more detailed insight into what happened at what time and how long it took. The information offered by Zipkin is very valuable for detecting performance bottlenecks in your distributed microservice environment, or simply for understanding how the microservices are connected to each other.

Zipkin Detail View of a Span

Client Side Load Balancing

Last but not least, we want to see if the client-side load balancing with Ribbon is working. For this we need at least two instances of one of the back-end services, which can be achieved by specifying a different server.port property for each instance. If you are using the Spring Tool Suite for Eclipse this is straightforward: in the Boot dashboard, just right-click on the direct-flight-cost-service and select Duplicate Config. Then open the config, add a new property server.port=9000 and press Apply. Afterwards start this new config. Ensure that both service instances are registered with Eureka. Now reload the front-end. Verify that both instances are requested in round-robin fashion by checking their logs, or by opening the detail view of two successive spans in Zipkin and checking their ports in the Address column (see also the screenshot above).


Where to go from here

This blog post does not cover all aspects of Spring Cloud. It is therefore the first part of a planned series of blog posts about Spring Cloud. Furthermore, we want to bring this showcase with its technologies into Pivotal Cloud Foundry. Because blog posts usually deal with controlled environments, they don't face real-world problems and corner cases. This also applies to the current one. That's why we will port the technologies used into a more business-grounded scenario introduced with Educama, a showcase demonstrating Case Management and Business Process Management for transport logistics. Any problems we face when applying the technologies to Educama will be the source of further blog posts. So stay tuned!
