February 6, 2018

Scaling and maintaining apps manually can be a pain in the ass. You have to handle performance issues on your own and meet availability requirements such as zero downtime. What could simplify these DevOps tasks in the cloud?

Our environment

In our particular case we have a legacy JEE application that provides fuel price information for our iOS and Android app. All components of our backend environment were hosted on multiple EC2 instances on AWS. The environment consisted of three components:

  • Postgres as a relational database
  • GlassFish as the JEE application server
  • Elastic Load Balancer

Doing it the painful manual way

We had the load balancer in front of our GlassFish instances for two reasons. The first one is to scale our JEE application. This is quite straightforward because the application is stateless, so multiple instances can run at the same time. In case of a monitoring alert, e.g. due to excessively long response times from our JEE application, we had to add a new GlassFish instance manually. The load balancer then distributes the incoming HTTP requests across the connected GlassFish instances and the response time decreases.

The second reason is to roll out a new version of the JEE application without downtime. Again, we took advantage of the ability to use multiple instances simultaneously: we ran the old and the new application version concurrently during the rollout. And we had to do all of this manually.

It was important for us to have a concept for scaling. When you develop, for example, a business application, it is fairly easy to predict how many people will use it. For a mobile app this is harder, because it is distributed via the App Store and the Play Store. Those stores promote newcomer apps, so your app can become very popular right at the start, as Pokémon GO did.

What we were seeking

The feature set of the application was more or less complete, so no major changes were expected for the coming years. We had spent a lot of time optimizing the application, so the performance of the JEE application was quite good. For our business case we don’t need multiple heavy machines: 1 core and 2 GB of memory for the application, plus a relational database with 1 core and 1 GB of memory, is sufficient. But we had the following problems with the existing solution.

  • Scaling the GlassFish instances was still done manually. No one wants to take care of scaling an application on a public holiday. No one!!!
  • The rollout, with its 11 steps, was manual as well. We had to spend at least one hour on each rollout.

We needed a solution to automate the rollout and the scaling of our JEE application. Finally, we discovered AWS Elastic Beanstalk.

Why we moved to Elastic Beanstalk

Static and dynamic application behavior

Beanstalk has the same infrastructure components as we use: a load balancer at the front of the environment, which can be the gateway to your private network or to the internet. The number of connected instances is static or dynamic, depending on the configuration. If you want a dynamic number of instances, you can scale on almost every metric you can think of: CPU, memory, I/O, network requests, time, and many more. Last but not least, you can run a relational database as the datastore, provided via the RDS service.

When running some stress tests, we figured out that the CPU was the first resource to hit its limit for our application. We had to become more flexible, and Beanstalk helped us out.
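
To give an idea of what this looks like in practice, here is a minimal sketch using the AWS SDK for Java (v1) to set such a CPU-based scaling trigger on an environment. The environment name, instance counts, and thresholds are placeholder values for this example, not our actual configuration:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
    import com.amazonaws.services.elasticbeanstalk.model.ConfigurationOptionSetting;
    import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

    public class ConfigureScaling {
        public static void main(String[] args) {
            AWSElasticBeanstalk eb = AWSElasticBeanstalkClientBuilder.defaultClient();

            // Scale on average CPU utilization: add an instance above 70 %,
            // remove one below 30 %. "fuel-prices-env" is a placeholder name.
            eb.updateEnvironment(new UpdateEnvironmentRequest()
                .withEnvironmentName("fuel-prices-env")
                .withOptionSettings(
                    new ConfigurationOptionSetting("aws:autoscaling:asg", "MinSize", "1"),
                    new ConfigurationOptionSetting("aws:autoscaling:asg", "MaxSize", "4"),
                    new ConfigurationOptionSetting("aws:autoscaling:trigger", "MeasureName", "CPUUtilization"),
                    new ConfigurationOptionSetting("aws:autoscaling:trigger", "Statistic", "Average"),
                    new ConfigurationOptionSetting("aws:autoscaling:trigger", "Unit", "Percent"),
                    new ConfigurationOptionSetting("aws:autoscaling:trigger", "UpperThreshold", "70"),
                    new ConfigurationOptionSetting("aws:autoscaling:trigger", "LowerThreshold", "30")));
        }
    }

The same option settings can of course be set once in the Beanstalk console or in configuration files instead of through the SDK.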

Zero Downtime with batch deployment

Beanstalk provides multiple ways to deploy a new version of your application with zero downtime. One of these is a rolling deployment, with or without an additional batch. This means that if the environment has four instances and the batch size is two, Beanstalk first starts two new instances, so temporarily there are six running instances. After the additional instances have been successfully added to the load balancer, two old instances are stopped. This procedure is repeated until all instances are replaced. Using an additional batch is a good way to deploy a new version while the environment is under load; without it, the capacity of the environment is reduced during the deployment.
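
As a sketch of how this deployment policy can be configured, again with the AWS SDK for Java (v1) and a placeholder environment name and batch size:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
    import com.amazonaws.services.elasticbeanstalk.model.ConfigurationOptionSetting;
    import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

    public class ConfigureDeployment {
        public static void main(String[] args) {
            AWSElasticBeanstalk eb = AWSElasticBeanstalkClientBuilder.defaultClient();

            // Deploy in fixed batches of two instances, starting an additional
            // batch first so capacity never drops below the normal level.
            eb.updateEnvironment(new UpdateEnvironmentRequest()
                .withEnvironmentName("fuel-prices-env")
                .withOptionSettings(
                    new ConfigurationOptionSetting("aws:elasticbeanstalk:command", "DeploymentPolicy", "RollingWithAdditionalBatch"),
                    new ConfigurationOptionSetting("aws:elasticbeanstalk:command", "BatchSizeType", "Fixed"),
                    new ConfigurationOptionSetting("aws:elasticbeanstalk:command", "BatchSize", "2")));
        }
    }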

Region restrictions

All Beanstalk resources of one environment are located in a single region. This means you cannot run one instance in Frankfurt and another in Ohio, but you can run instances in multiple availability zones within one region, and Beanstalk will distribute them evenly. The RDS standby instance can also run in another availability zone.
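
A hedged sketch of the corresponding settings, assuming a Beanstalk-managed RDS instance; the environment name is a placeholder and the option values are examples:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
    import com.amazonaws.services.elasticbeanstalk.model.ConfigurationOptionSetting;
    import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

    public class ConfigureAvailability {
        public static void main(String[] args) {
            AWSElasticBeanstalk eb = AWSElasticBeanstalkClientBuilder.defaultClient();

            // Spread instances across at least two availability zones and run
            // the Beanstalk-managed RDS instance with a standby in another zone.
            eb.updateEnvironment(new UpdateEnvironmentRequest()
                .withEnvironmentName("fuel-prices-env")
                .withOptionSettings(
                    new ConfigurationOptionSetting("aws:autoscaling:asg", "Availability Zones", "Any 2"),
                    new ConfigurationOptionSetting("aws:rds:dbinstance", "MultiAZDatabase", "true")));
        }
    }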

Docker

Our managed Beanstalk EC2 instances use Docker to separate the Beanstalk stack from our application stack. Our application stack is the GlassFish JEE server with the WAR file installed. Beanstalk provides some predefined Docker images, GlassFish among them. We use this image, but you can also bring your own: put your application into a Docker image and you can run it in a Beanstalk environment.
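
To close the loop, here is a sketch of rolling out a new version with the AWS SDK for Java (v1): the source bundle in S3, which for a Docker-based environment contains the Dockerfile or Dockerrun.aws.json alongside the WAR file, is registered as an application version, and the environment is then switched to it. Bucket, key, and all names are placeholders:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
    import com.amazonaws.services.elasticbeanstalk.model.CreateApplicationVersionRequest;
    import com.amazonaws.services.elasticbeanstalk.model.S3Location;
    import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

    public class DeployVersion {
        public static void main(String[] args) {
            AWSElasticBeanstalk eb = AWSElasticBeanstalkClientBuilder.defaultClient();

            // Register the uploaded bundle as a new application version.
            eb.createApplicationVersion(new CreateApplicationVersionRequest()
                .withApplicationName("fuel-prices-app")
                .withVersionLabel("v42")
                .withSourceBundle(new S3Location("my-deploy-bucket", "bundles/v42.zip")));

            // Point the environment at the new version; Beanstalk performs the
            // rolling deployment configured above with zero downtime.
            eb.updateEnvironment(new UpdateEnvironmentRequest()
                .withEnvironmentName("fuel-prices-env")
                .withVersionLabel("v42"));
        }
    }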

Conclusion

If you want to scale and deploy, for instance, legacy JEE applications automatically, Beanstalk can be a great choice to simplify your daily DevOps business without changing a single line of code.
