Cloud Ready Applications

The days of traditional servers are numbered. The flexibility of cloud platforms compared to traditional datacenters is what drives the shift. For a developer, the question is no longer where the application will be deployed but on which cloud platform. And designing an application for the cloud is more complex than on premise. As developers we must be ready for that: we must write code that is ready for the cloud. But what exactly is a cloud-ready application? Is it a one-push deployment to Heroku? Yes, the cloud can make life easier for the developer. But in order to unleash all its benefits you must design your code for the cloud. This article aims to explain the why and the how through three highlights of the shift in application architecture from the on-premise world to the cloud.

The fork-lift phase

Most experiences with the cloud begin the same way: you carry your application like a pallet from the datacenter to the cloud. You are building an application and, when the time comes to deploy it, the flexibility of the cloud makes it the natural choice: startups see the benefit of no upfront cost and larger companies see the benefit of externalizing an activity that is not a key differentiator. As developers, we see benefits too: it is easier to get an environment (at least in terms of provisioning time) and there is smart automated tooling to deploy your application quickly as it is.
Lots of applications go to the cloud this way, with a fork-lift approach: pushing the artifact onto that new kind of server.

The devil is in the details when we try to copy/paste something

Things start to get harder when you go into the details. A lot of configuration is required, just as with an on-premise server. But a cloud environment comes with some assumptions built in. Soon you start to think "the cloud is not designed for real enterprise applications" or "on the cloud there is more Ops work than on premise". More and more services exist, but more and more configuration has to be done.

First DevOps opportunities

The fork lift is not something to avoid. It brings immediate value, and it can be the opportunity to discover the cloud and to start pushing your application forward.
One important concept of the cloud is automation. Most of the time, a cloud migration must come with an infrastructure-as-code approach. As a developer, it can look like just another operations task. But the DevOps approach is much more than infrastructure as code. In order to provide seamless automation from code commit to code in production, dev and ops need to work together. Let's see two examples of software architecture best practices that will make ops love your work and help them automate the installation.

Smart configuration

Configuration values, the database password being the most common one, are an old synchronization subject between dev and ops. Let the ops automate their deployment and they will be very happy with your JSON config file! Things are not always so simple. For example, as a developer you want to organize your code according to conventions and in a way that helps you work smoothly. Ops have the same need, and they probably want to split your super JSON file into environment variables. Or they have their own big file and totally ignore 80% of yours. The best way to build a smart configuration that fits the automation process is to work with the ops from the beginning, making the architecture, design and convention choices together.
Here are some questions that may help you segregate your configuration data in a way that eases working with ops:

  • Ask yourself what the lifecycle of each piece of data you call configuration is. Does it change from one environment to another? Do you want to be able to change it after seeing your program run (all tuning parameters fall into that case)?
  • If it doesn't, it is probably not configuration from an operations point of view. It can be business or development configuration, but not runtime configuration. Database storage can be a good choice for business configuration (configuration that ops don't care about).
  • For the configuration data you have identified, externalize it so that it is readable and easily writable inside the package of your application. Documentation and conventions are key for that.
  • By default, choose files to store configuration and make their location customizable through an environment variable.
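The last point above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the variable name `APP_CONFIG_FILE` and the JSON format are assumptions chosen for the example; ops can repoint the application at their own file without touching the package.

```python
import json
import os


def load_config(default_path="config.json"):
    """Load runtime configuration from a JSON file.

    The file location can be overridden through an environment
    variable (APP_CONFIG_FILE here, a name chosen for this sketch),
    so ops can substitute their own file during automated deployment.
    """
    path = os.environ.get("APP_CONFIG_FILE", default_path)
    with open(path) as f:
        return json.load(f)
```

With this convention in place, the deployment script only has to export one environment variable per environment.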
Health check URL

Another improvement to your application is its ability to be monitored. With one to three servers per application in a plain old deployment, the system administrator knows the application, its dependencies, and how to analyze the logs to check its state. In the cloud world, the ops job has changed. Through automation, one person can administer hundreds of servers. Your application is necessarily unknown to them, and diagnostics have to be done through pre-defined metrics. In the near future, diagnostics may even be done partly by artificial intelligence. In order to push your application in that direction you have to implement a health check URL: a technical URL, used only for administration purposes, that returns a 200 HTTP code if all is fine, and another code in case of problems. I recommend putting the detailed results of the tests performed in the payload in order to ease diagnosis. A sample health check could be a simple routine that makes a dummy query to the database and pings the required dependent web services. This provides very useful information to operations.
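The health check described above can be sketched as a small routine, independent of any web framework. The check functions (a dummy database query, a ping to a dependent service) are assumptions standing in for real probes; the routine returns the status code plus the detailed payload the article recommends.

```python
def health_check(checks):
    """Run a dict of named check functions; each returns True if healthy.

    Returns an HTTP-style status code plus a detailed payload so that
    ops (or an automated monitor) can see which dependency failed.
    The checks themselves (database query, service ping...) are
    application-specific and only stubbed here.
    """
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "OK" if check() else "KO"
        except Exception as exc:
            results[name] = f"KO: {exc}"
    status = 200 if all(v == "OK" for v in results.values()) else 503
    return status, results
```

Wiring this into a `/health` route of your web framework of choice is then a one-liner.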

Such a deployment brings flexibility, but it is still a copy of what you do on premise. Your granularity of flexibility is a server that you configure and run. Let's go further.

Micro-services in micro-slice

What is the next step? The previous section described an IaaS approach; this one is more or less about CaaS. Why more or less? When I talked to an ops colleague, he told me that the kind of container is purely a design choice; the way you use it has much more impact. This section is about using several containers that you coordinate together.
As a software architect, I have been working with micro-service architecture for a year. Deploying a monolith in a container and logging in every time to inspect its logs is the first reflex. And it has less impact on my code than splitting an application into four micro-services that run in containers which are destroyed and recreated in case of error. An application split into micro-services has to be designed for location transparency and designed for failure.

Location transparency

Deploying an application as a set of micro-services has a real advantage on a cloud platform because it allows you to scale and adjust the capacity of each service (if the services are scalable). Cloud platforms are designed to provision lots of small machines. Even if big or dedicated servers are available, their high prices reflect that the cloud is not designed for that. Combining lots of small workloads allows cloud providers to optimize their physical hardware far better. To make that possible, you are encouraged not to choose where your virtual machine will be executed; it may even be moved during a run. As a developer, it means that your code should be transparent to its location. Clusters already required stateless code; a cloud platform requires statelessness and is even less forgiving: make no assumptions about latencies, attached file systems and so on. To reach that goal, consider that the only assumption you can make is access to services through the network. If you want to provide state semantics, you either have to use a service that offers them (the recommended choice) or implement a replication mechanism with all the associated problems (only if you're AWS or Google).
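The "use a service that offers state semantics" recommendation can be illustrated with a tiny sketch. Here a plain dict stands in for an external store reached over the network (Redis, Memcached, a managed cache...); the point is that any instance of the service can handle any request, because no state lives in process memory. `SessionStore` and `handle_request` are names invented for this example.

```python
class SessionStore:
    """Stateless-friendly session handling: all state lives in an
    external service reached over the network. A plain dict stands
    in for that service so the sketch stays self-contained."""

    def __init__(self, backend=None):
        self.backend = backend if backend is not None else {}

    def get(self, session_id):
        return self.backend.get(session_id, {})

    def put(self, session_id, data):
        self.backend[session_id] = data


def handle_request(store, session_id, item):
    # Any instance of the service can serve this request: it reads
    # and writes state through the store, never through local memory,
    # so the instance can be moved or replaced at any time.
    cart = store.get(session_id)
    cart.setdefault("items", []).append(item)
    store.put(session_id, cart)
    return cart
```

Two instances sharing the same backing service will see the same session, which is exactly what location transparency requires.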

Design for failure

"Everything fails all the time," says Werner Vogels, CTO of Amazon, about systems at large scale, and the cloud is large scale. So the more micro-services you use, the higher the probability that a call to a service fails or times out. Your code has to take care of that. As developers we have to design for failure, in particular when calling other services. Several patterns exist for such a design; describing them here would take too long, and they will be the subject of a further article.
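As a minimal taste of those patterns, here is a retry-with-backoff sketch for calling another service. It is only the simplest of the family: real designs add timeouts, jitter and circuit breakers. The function name and parameters are invented for this example.

```python
import time


def call_with_retry(service_call, retries=3, backoff_seconds=0.1):
    """Call a remote service, retrying with exponential backoff.

    Designing for failure means treating a failed or timed-out call
    as the normal case, not the exception. This sketch only shows
    the principle; production code adds timeouts, jitter and a
    circuit breaker to avoid hammering a dead dependency.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return service_call()
        except Exception as exc:
            last_error = exc
            time.sleep(backoff_seconds * (2 ** attempt))
    raise last_error
```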

Prepare your application to restart at any time

Granularity in the cloud means smaller machines. It also means smaller time frames. In order to fully benefit from the pay-for-what-you-use model of the cloud, you should be able to stop your application when you don't use it. When only two people use your application, it should be able to run in one process. When 10,000 people use it, it should work just as well on 1,000 processes. Requests will be routed the right way thanks to the cloud provider's load balancer, but your code has to be able to stop and start smoothly and quickly.
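Stopping smoothly usually means reacting to the termination signal the platform sends when it reclaims a process. This is a minimal sketch, assuming a Unix-like environment where the platform sends SIGTERM; the `Worker` class is invented for the example.

```python
import signal


class Worker:
    """Sketch of a process that stops quickly and cleanly when the
    platform reclaims it, finishing in-flight work first."""

    def __init__(self):
        self.running = True
        # The platform signals shutdown with SIGTERM before killing
        # the process; register a handler so we can drain gracefully.
        signal.signal(signal.SIGTERM, self._shutdown)

    def _shutdown(self, signum, frame):
        self.running = False

    def run(self, jobs):
        done = []
        for job in jobs:
            if not self.running:
                break  # stop taking new work; finish what is in flight
            done.append(job())
        return done
```

Fast startup is the symmetric concern: keep initialization light so a new process can join the load balancer pool in seconds.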

So designing in micro-services is a good starting point for the cloud, but it has to be completed with other requirements: designing for failure and for frequent restarts.

Serverless architecture

So, my application is split into micro-services; they can be started and stopped on demand and are resilient to the crash of other services. What do we need now? We need to benefit from the high-level services provided by the platform, and we need to go a step further in terms of scalability.

Developing on platforms

Cloud platforms provide higher-order services that can automate tasks like database backups or data migration. But this comes with drawbacks: you have to adapt your code to the platform. So how do you develop on such a platform? Pushing your code directly into the platform is appealing, but how will you debug? When do you make the transition from a local process to the platform? Most platform foundations are open-source projects that can be run locally, and this is, from my point of view, a must-have. Every project should be able to run locally with one command. It allows developers to test quickly and to debug easily. Then, plan to deploy to the cloud platform at an early stage of your continuous deployment process. Containers are gaining popularity quickly as they allow running the same runtime locally and on the cloud. But the latest high-level services are often closed-source and not available as Docker images. I personally prefer to accept a small difference between my development runtime and the target runtime rather than being limited in my runtime choice. For debugging purposes, cloud logging services are your best friends. But they are limited, and building custom methodology and tooling will probably be one of your first jobs as a cloud platform developer.

Developing on a serverless architecture

The second step is still more flexibility. AWS has an offer called AWS Lambda. The promise is truly pay-per-use: no more pricing relative to uptime. You now pay only according to the number of requests you process, which is very interesting for a startup that cannot afford an up-front investment. Scalability is far better because billing is per request: without customers, you have no bill. It can also be a good option for a departmental application that is used only a few days a year. But what does it mean for me as a developer? The server has completely disappeared. You have to code event handlers, whose triggering events can be HTTP requests through an API, messages, or updates in a database. The requests are totally independent. The idea of a cache, for example, has to be handled totally differently. As with a platform, you will face the same problem as described before: what should your local platform be? And finally, how do you organize your code? Business logic is spread across the assembly of event handlers. Is the promise-based approach the best option? How do you separate the low-level handling of events?
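The shape of such an event handler is simple: a pure function that receives an event and returns a response, with no server and no shared in-process state between invocations. This sketch mimics a simplified API Gateway HTTP event; the field names follow the AWS event format but the handler logic is invented for the example.

```python
def handler(event, context=None):
    """AWS Lambda-style handler: one event in, one response out.

    Each invocation is independent; anything that must survive
    between requests (cache, session...) has to live in an external
    service, not in this process.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}
```

Because the function is pure, it can be tested locally by simply calling it with a sample event, which partly answers the "what is my local platform?" question.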
Some frameworks like Serverless or node-lambda are starting to emerge. We are at the very early age of what may be a new wave of software architecture. At OCTO, we are still exploring this ecosystem. My first recommendation is to invest deeply in code organization. Components should above all be designed to keep complexity manageable. Besides, deploying on a Lambda requires that it be small enough to load quickly at startup. Finding the right split into lambdas is therefore my first priority. Then I need to put the right frameworks in place to build a strong development platform with debugging capability, both locally and on the cloud. Finally, coupling with proprietary platforms should not be avoided at all costs but managed with the correct patterns. Serverless architecture is also the promise of big changes and big opportunities.

Conclusion

To conclude, in order to take full benefit of the cloud, the design of your application has to be adapted. A lot of today's best practices are simply must-have practices in the cloud. The micro-service approach allows you to go one step further to benefit from the cloud. Serverless architecture is the most advanced scenario and requires rethinking globally the way you organize code. What will replace the plain old layered architecture? Which best practices will emerge for promises and event handling? The future is still to be written, but it is one of our core challenges for the next few years.
