History of servers
My beloved server,
If you recognize yourself in the following description, it’s time to ask yourself the right questions and start rethinking your IT career.
In the beginning we had Ops who were in love with their on-premise servers hidden in dark basements. They loved watching their blinking LEDs in the dark, taking care of them, constantly upgrading them, and speaking about their beauty, impenetrability and power.
They spent (lost) their lives patching, reading logs, and watching monitoring graphs.
The problem is that people so in love with technology shouldn’t be working on relics in dark basements; they should be inventing technology.
My beloved cluster,
Lately, Ops/DevOps no longer manage one server at a time: they have enormous numbers of machines, and they see this power as a group (a cluster). A single element of the cluster is meaningless; they monitor the group, and the group is the only thing that matters.
Patches are applied with scripts on all machines at the same time.
Provisioning and recreating a unit became so easy that we don’t have to secure individual units anymore.
A machine that is not working as expected is killed without warning. It’s the Cloud Computing era.
Nobody talks about the power of a machine anymore; machines are called nodes, and summed up they become power.
Where are we going?
The answer is simple: for decades we used the most intelligent people to keep services up, and we asked other people to innovate.
A bit weird, you’ll say. We were not hearing ideas from the people who know what’s under the hood of technology. We were asking salespeople and developers to invent the IT of the future.
Fortunately, in their basements and offices, those Ops/DevOps organised the future. They improved the way we manage our servers by developing open-source projects and free software that make managing a server so easy it can be automated!
No human intervention on machines is allowed anymore; clusters manage themselves and adapt their power as needed.
The Ops/DevOps now have time to use the true power of what’s under the hood, to innovate and to make IT better.
Building a serverless service with Google Cloud Platform:
The context is simple and can be adapted to many other use cases (anything that exposes a web service or web app and asynchronously processes data in parallel). In this case, it’s a service that transcodes your videos and delivers them to your end users.
Your app no longer manages video/audio assets; your users send and retrieve assets from a third-party service.
- High availability storage.
- World wide distribution.
- Secure upload/download.
- Transcoding must be asynchronous and run in parallel.
- The customer app never has to manage assets itself.
- The service cost should adapt to usage.
- The service should be as cheap as possible.
- The service should scale fast when needed.
We tried to keep it simple and stupid:
- Simple integration with customer apps
- Simple user operations
- Simple billing model
- Simple REST API
- Simple API key authentication
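To make “simple API key authentication” concrete, here is a minimal sketch of the kind of per-request check the service could perform. The header name (`X-API-Key`), the in-memory key store and all identifiers are assumptions for illustration, not the actual implementation:

```python
import hmac
from typing import Optional

# Hypothetical key store: customer id -> secret API key.
# In the real service this would live in Cloud SQL, not in memory.
API_KEYS = {"customer-42": "s3cr3t-key"}

def authenticate(headers: dict) -> Optional[str]:
    """Return the customer id if the request carries a valid API key."""
    presented = headers.get("X-API-Key", "")
    for customer, key in API_KEYS.items():
        # compare_digest avoids leaking key contents through timing differences.
        if hmac.compare_digest(presented, key):
            return customer
    return None
```

Every API call would run through such a check before touching the database or the storage.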
We based our service on 5 bricks from the Google Cloud Platform:
- App Engine: hosts the API; it scales automatically and is fully managed by Google.
- Google Cloud SQL: our data model is relational, and Google Cloud SQL is fully managed (backups, availability, auto shutdown, patches, etc.).
- Google Cloud Storage: high-availability, secure object storage.
- Google Task Queue: a better fully managed queue service will be available soon.
- Google Container Engine: manages and runs our Docker transcoders.
The service is based on 4 scenarios:
User wants to upload an asset:
- The client side of the customer app asks the server side for an upload URL (sending the filename, size or MD5 of the file to upload).
- The request is forwarded to the service with the customer authentication key.
- The service stores the request information in the DB and responds with a signed URL that has a limited lifetime and can only be used for the specified file.
- The web service also notifies GKE of the upcoming load so the cluster size can be adapted.
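The core of this scenario is the signed URL. In production this would be generated by Cloud Storage’s own signed-URL mechanism (signing with the service-account key via the client library); the sketch below only illustrates the principle with an HMAC, and the secret, host name and parameters are all hypothetical:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SIGNING_SECRET = b"server-side-secret"  # assumption: known only to the service

def make_signed_upload_url(bucket, filename, md5, ttl_seconds=900, now=None):
    """Build a time-limited upload URL tied to one specific file.

    Real GCS signed URLs are produced by signing with a service-account
    key; this HMAC version only demonstrates the idea.
    """
    expires = (int(time.time()) if now is None else now) + ttl_seconds
    # The signature covers the exact object and the file's MD5, so the URL
    # cannot be reused for another file or after it expires.
    payload = "{}/{}:{}:{}".format(bucket, filename, md5, expires)
    sig = hmac.new(SIGNING_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    query = urlencode({"md5": md5, "expires": expires, "signature": sig})
    return "https://storage.example.com/{}/{}?{}".format(bucket, filename, query)
```

Because the signature is bound to the filename, MD5 and expiry time, the customer app can hand the URL to an untrusted browser without exposing any credentials.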
User uploads the file:
- Using the signed URL, the user uploads the file directly to the storage.
- The web service is notified once the upload is completed.
- The web service updates the asset status.
- The web service adds a message to the queue.
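The last two steps of this scenario — update the status, enqueue a transcoding job — can be sketched as a single notification handler. The in-memory dict and `queue.Queue` stand in for Cloud SQL and the Task Queue; all names are illustrative:

```python
import queue

transcode_queue = queue.Queue()  # stand-in for the Google Task Queue
# Stand-in for the assets table in Cloud SQL.
assets = {"asset-1": {"status": "awaiting-upload"}}

def on_upload_complete(asset_id):
    """Storage-notification handler: mark the asset uploaded and enqueue it."""
    assets[asset_id]["status"] = "uploaded"
    transcode_queue.put({"asset_id": asset_id})
```

Keeping the handler this small matters: it runs on App Engine in the request path, while all the heavy work happens later, pulled from the queue.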
Transcoding the asset:
- One of the Docker workers pulls the message from the queue.
- The worker retrieves the asset from the storage.
- The worker processes the asset and stores the results in the storage.
- The worker notifies the web service.
- The web service updates the asset status.
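The worker side of this loop is simple enough to sketch end to end. Here a dict stands in for Cloud Storage, a second dict for the status update normally sent back to the web service, and `transcode` is a trivial placeholder for the real ffmpeg-style processing inside the Docker container:

```python
import queue

def transcode(data):
    # Placeholder for the real transcoding step run inside the container.
    return data.upper()

def worker_step(q, storage, statuses):
    """Process one queue message: pull, fetch, transcode, store, notify."""
    msg = q.get()
    asset_id = msg["asset_id"]
    result = transcode(storage[asset_id])   # retrieve + process the asset
    storage[asset_id + ".out"] = result     # store the result back
    statuses[asset_id] = "transcoded"       # stand-in for notifying the API
    q.task_done()
```

Because each message is independent, running more containers on Container Engine scales the transcoding throughput linearly, with no coordination between workers.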
Downloading an asset:
- The user requests a page containing an asset.
- The customer server requests a signed URL for the asset.
- The web service generates a time-limited signed URL and returns it to the customer app.
- The URL is inserted in the customer’s HTML response to the end user.
- The user’s browser retrieves the asset from the storage.
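The download path relies on the storage rejecting expired or tampered URLs. With GCS that check is built in; as a sketch, here is the verification that mirrors the HMAC-style signing shown for uploads (same hypothetical secret and parameters, purely illustrative):

```python
import hashlib
import hmac

SIGNING_SECRET = b"server-side-secret"  # same secret the signing side used

def verify_signed_url(bucket, filename, md5, expires, signature, now):
    """Accept the request only if the URL is unexpired and untampered."""
    if now > int(expires):
        return False  # the limited lifetime has passed
    payload = "{}/{}:{}:{}".format(bucket, filename, md5, expires)
    expected = hmac.new(SIGNING_SECRET, payload.encode(),
                        hashlib.sha256).hexdigest()
    # Any change to bucket, filename, md5 or expires invalidates the signature.
    return hmac.compare_digest(expected, signature)
```

This is what makes it safe to embed the URL directly in the HTML sent to the end user: the link is useless once the time window closes.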
Advantages of this infrastructure:
Every brick of the infrastructure is managed by Google and automatically wired to their tools (centralized logging, Stackdriver monitoring and alerting, App Engine live debugging and auditing, backups, high availability, scaling, etc.).
Every tool’s price adapts to usage, and every element of the infrastructure can be shut down automatically when there are no requests to the service. The monthly cost drops drastically compared to IaaS or on-premise prices.
The service is extremely fast since we use the Google Cloud Platform network exclusively.
No more server management (bootstrapping, patching, etc.): since the whole infrastructure is autonomous, all the saved time goes into improving and evolving the app.