History of servers

My beloved server,

If you recognize yourself in the following description, it’s time to ask yourself the right questions and start rethinking your IT and your career.

In the beginning we had Ops who were in love with their on-premise servers hidden in dark basements. They loved watching their blinking LEDs in the dark, taking care of them, constantly upgrading them, and speaking about their beauty, impenetrability and power.
They spent (lost) their lives patching, reading logs, and watching monitoring graphs.
The problem is that people who are so in love with technology shouldn’t be working on relics in dark basements; they should be inventing technology.


My beloved cluster,

Lately, Ops/DevOps have not been managing just one server at a time anymore: they have an enormous number of machines to manage, and they see this power as a group (a cluster). A single element in the cluster is meaningless; they monitor the group, and the only thing that matters is the group.

Patches are applied with scripts on all machines at the same time.
Provisioning and recreating a unit has become so easy that we don’t have to secure individual units anymore.
A machine which is not working as expected will be killed without warning. It’s the cloud computing era.
Nobody talks about machine power anymore; machines are called nodes, and only when summed up do they become power.


Where are we going?

The answer is simple: for decades we used the most intelligent people to keep services up, and we asked other people to innovate.
A bit weird, you’ll say. We were not hearing ideas from the people who know what’s under the hood of the technology; we were asking vendors and developers to invent the IT of the future.
Fortunately, in their basements and offices those Ops/DevOps organised the future. They improved the way we manage our servers by developing open source projects and free software that make the task of managing a server so easy it can be automated!
No human intervention on machines is needed anymore: clusters manage themselves and adapt their power as needed.

The Ops/DevOps now have time to use the true power of what’s under the hood, to innovate and to make IT better.


Building a serverless service with Google Cloud Platform:

The context is simple and can be adapted to many other use cases (anything exposing a web service or web app and asynchronously processing data in parallel). In this case, it’s a service that transcodes your videos and delivers them to your end users.
Your app no longer manages video/audio assets; your users send and retrieve assets through a third-party service.


Constraints:

  1. High availability storage.
  2. World wide distribution.
  3. Secure upload/download.
  4. Transcoding must be asynchronous and run in parallel.
  5. The customer app must never have to manage assets itself.
  6. The service cost should adapt to usage.
  7. The service should be as cheap as possible.
  8. The service should scale fast when needed.

We tried to keep it simple and stupid:

  • Simple integration to customer apps
  • Simple user operations
  • Simple billing model
  • Simple REST API
  • Simple API key authentication

We based our service on five building blocks from the Google Cloud Platform:

  1. App Engine: hosts the API; it scales automatically and is fully managed by Google (a minimal handler sketch follows this list).
  2. Google Cloud SQL: our data model is relational, and Cloud SQL is fully managed (backups, availability, auto shutdown, patches, etc.).
  3. Google Cloud Storage: highly available, secure object storage.
  4. Google Task Queue: for queuing transcoding jobs (a better fully managed queue service will be available soon).
  5. Google Container Engine: to manage and run our Docker transcoders.
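
The sketch below shows, very roughly, what the API front end could look like on App Engine. It is only an illustration under assumptions: the Flask framework, the /assets/upload-url endpoint, the X-Api-Key header and the helper names are made up here, not the actual service API.

    # Minimal sketch of the API front end, assuming a Flask app on App Engine.
    # The endpoint, header name and key store below are hypothetical illustrations.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)

    # Placeholder key store; real keys would live in Cloud SQL.
    API_KEYS = {"demo-key": "customer-42"}

    @app.route("/assets/upload-url", methods=["POST"])
    def request_upload_url():
        # Simple API key authentication.
        if request.headers.get("X-Api-Key", "") not in API_KEYS:
            abort(401)
        body = request.get_json(force=True)
        # Here the service would store the request in the DB and build a signed
        # upload URL (see the signed-URL sketch further down).
        return jsonify({"asset_id": body.get("filename"), "upload_url": "..."})
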
The service is based on four scenarios:


User wants to upload an asset:

  1. The client-side customer app asks the server side for an upload URL (sending the filename, size or MD5 of the file to upload).
  2. The request is forwarded to the service with the customer’s authentication key.
  3. The service stores the request information in the DB and responds with a signed URL that has a limited usage time and can only be used for the specified file (see the sketch after this list).
  4. The web service also notifies GKE (Container Engine) of the upcoming load so it can adapt the cluster size.
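
Step 3 could be implemented with a Cloud Storage signed URL. The snippet below is a sketch using the google-cloud-storage Python client; the bucket name, object path and expiry are examples, and signing requires service-account credentials.

    # Sketch: generate a time-limited, file-specific signed upload URL.
    import datetime
    from google.cloud import storage

    def make_upload_url(asset_id, content_md5):
        client = storage.Client()  # needs service-account credentials able to sign
        blob = client.bucket("transcoder-uploads").blob("incoming/%s" % asset_id)
        return blob.generate_signed_url(
            expiration=datetime.timedelta(minutes=15),  # limited usage time
            method="PUT",
            content_md5=content_md5,  # URL is only valid for the declared file content
        )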


User uploads file:

  1. Using the given signed URL, the user uploads the file directly to the storage.
  2. The web server is notified once the upload is completed.
  3. The web server updates the asset status.
  4. The web server adds a message to the queue (see the sketch after this list).
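
For step 4, assuming the web service runs on App Engine standard, the bundled Task Queue library could be used to push the job onto a pull queue. The queue name and payload fields below are made up for illustration.

    # Sketch: enqueue a transcoding job on a pull queue (App Engine standard).
    import json
    from google.appengine.api import taskqueue

    def enqueue_transcode_job(asset_id, gcs_path):
        queue = taskqueue.Queue("transcode-pull-queue")
        queue.add(taskqueue.Task(
            payload=json.dumps({"asset_id": asset_id, "gcs_path": gcs_path}),
            method="PULL",  # workers on Container Engine lease tasks from this queue
        ))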


Transcoding the asset:

  1. One of the Docker workers pulls a message from the queue (a worker sketch follows this list).
  2. The worker retrieves the asset from the storage.
  3. The worker processes the asset and stores the results in the storage.
  4. The worker notifies the web service.
  5. The web service updates the asset status.
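
The following is a rough sketch of what a worker inside one of the Docker transcoders could look like. It is built on assumptions: lease_task and delete_task are hypothetical wrappers around the Task Queue REST API, and the bucket names, NOTIFY_URL and ffmpeg settings are invented for illustration.

    # Sketch of a transcoding worker running in a container on Container Engine.
    import json
    import subprocess

    import requests
    from google.cloud import storage

    NOTIFY_URL = "https://example-app.appspot.com/internal/asset-done"  # hypothetical endpoint

    def lease_task():
        """Hypothetical wrapper around the Task Queue REST API lease call."""
        raise NotImplementedError

    def delete_task(task):
        """Hypothetical wrapper around the Task Queue REST API delete call."""
        raise NotImplementedError

    def process_one(task, gcs_client):
        job = json.loads(task["payload"])
        src = "/tmp/%s.src" % job["asset_id"]
        dst = "/tmp/%s.mp4" % job["asset_id"]

        # 2. Retrieve the asset from the storage.
        gcs_client.bucket("transcoder-uploads").blob(job["gcs_path"]).download_to_filename(src)

        # 3. Process the asset (example ffmpeg settings) and store the result.
        subprocess.check_call(["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst])
        gcs_client.bucket("transcoder-results").blob("%s.mp4" % job["asset_id"]).upload_from_filename(dst)

        # 4. Notify the web service so it can update the asset status.
        requests.post(NOTIFY_URL, json={"asset_id": job["asset_id"], "status": "ready"})

    def main():
        gcs_client = storage.Client()
        while True:
            task = lease_task()  # 1. pull a message from the queue
            if task is None:
                continue
            process_one(task, gcs_client)
            delete_task(task)  # acknowledge the leased task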


Downloading an asset:

  1. The user requests a page containing an asset.
  2. The customer’s server requests a signed URL for the asset.
  3. The web service generates a time-limited signed URL and forwards it to the customer app (see the sketch after this list).
  4. The URL is inserted into the customer’s HTML response to the end user.
  5. The user’s browser retrieves the asset directly from the storage.
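
Step 3 is the download counterpart of the upload sketch above: a short-lived signed GET URL lets the browser fetch the transcoded asset straight from Cloud Storage. Again, the bucket name and expiry are examples.

    # Sketch: generate a short-lived signed download URL for a transcoded asset.
    import datetime
    from google.cloud import storage

    def make_download_url(asset_id):
        blob = storage.Client().bucket("transcoder-results").blob("%s.mp4" % asset_id)
        return blob.generate_signed_url(
            expiration=datetime.timedelta(minutes=10),  # limited in time
            method="GET",
        )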


Advantages of this infrastructure:

Every building block of the infrastructure is managed by Google and automatically wired into its tooling (centralized logging, Stackdriver monitoring and alerting, App Engine live debugging, App Engine audit logs, backups, high availability, scaling, etc.).

Every tool adapts its price to usage, and every element of the infrastructure can be shut down automatically when there are no requests to the service. The monthly cost drops drastically compared to an IaaS or on-premise equivalent.

The service is extremely fast since it uses the Google Cloud Platform network exclusively.

No more server management (bootstrapping, patching, etc.): since the whole infrastructure is autonomous, all the saved time goes into improving, evolving and innovating on the app.