Monday, July 17, 2017

HTTP 2.0

HTTP/2 is a binary transfer protocol that addresses the shortcomings of its predecessor with full request and response multiplexing, compression of HTTP header fields, and support for request prioritization and server push.

Shortcomings of H1 (HTTP 1.1) and Workarounds

One Request-Response Cycle per TCP Connection
The workaround was to open multiple TCP connections (4-8 per host), and it worked: you could now send 4-8 parallel requests to the server over separate TCP connections, as the sketch below illustrates.
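To make the problem concrete, here is a minimal Go sketch (the URLs are placeholders) that fires several requests in parallel while pinning the client to HTTP/1.1. Since each HTTP/1.1 connection can carry only one in-flight request-response cycle, the transport has to open a separate TCP connection per concurrent request:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// A non-nil (even empty) TLSNextProto disables the transport's
	// automatic HTTP/2 upgrade, forcing plain HTTP/1.1.
	client := &http.Client{
		Transport: &http.Transport{
			TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
		},
	}

	// Placeholder asset URLs standing in for a typical page load.
	urls := []string{
		"https://example.com/app.css",
		"https://example.com/app.js",
		"https://example.com/hero.png",
		"https://example.com/logo.svg",
	}

	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := client.Get(u)
			if err != nil {
				fmt.Println(u, "->", err)
				return
			}
			defer resp.Body.Close()
			// Each concurrent HTTP/1.1 request occupies its own
			// TCP connection for the whole request-response cycle.
			fmt.Println(u, "->", resp.Proto, resp.Status)
		}(u)
	}
	wg.Wait()
}
```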

Cost of Multiple TCP Connections

Sockets at both the client and the server consume significant resources: extra memory buffers and CPU overhead. Parallel TCP streams also compete for bandwidth, since each connection runs its own congestion control.

More Workarounds

·        Domain sharding: splitting assets across extra (often fake) domains to gain more parallel connections
·        Image spriting, JavaScript and CSS file concatenation
·        Vulcanizing: concatenating your web components, and more

HTTP/2 to the rescue:

Since it enables multiplexing (multiple request-response cycles in parallel over a single TCP connection), all the workarounds mentioned above can be avoided.
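As an illustration, Go's standard net/http server negotiates HTTP/2 automatically over TLS (via ALPN), so a minimal sketch like the one below serves all concurrent requests from a browser over one multiplexed connection (cert.pem and key.pem are placeholder certificate files):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Proto reports "HTTP/2.0" when the request arrived over
		// a multiplexed HTTP/2 connection.
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})

	// Go's net/http enables HTTP/2 automatically on TLS listeners,
	// so concurrent requests from one client share a single TCP
	// connection instead of 4-8 parallel ones.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```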


Header Compression

Each HTTP transfer carries a set of headers that describe the transferred resource and its properties. In HTTP 1.x this metadata is always sent as plain text and adds anywhere from 500–800 bytes of overhead per request, and kilobytes more when HTTP cookies are involved. To reduce this overhead and improve performance, HTTP/2 compresses header metadata using the HPACK format.
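To see the compression at work, here is a small sketch using golang.org/x/net/http2/hpack, Go's HPACK implementation (the header values are made up):

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	// Typical request headers that HTTP/1.x would resend as plain
	// text on every single request.
	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: "user-agent", Value: "Mozilla/5.0 (made-up UA string)"},
		{Name: "cookie", Value: "session=abc123"},
	}

	plain := 0
	for _, h := range headers {
		plain += len(h.Name) + len(h.Value) + 4 // ": " plus CRLF in HTTP/1.x
		if err := enc.WriteField(h); err != nil {
			panic(err)
		}
	}
	fmt.Printf("plain text: ~%d bytes, HPACK-encoded: %d bytes\n",
		plain, buf.Len())
	// Headers found in HPACK's static table (e.g. ":method: GET")
	// encode to a single byte, and repeated headers on later requests
	// shrink to short index references into the dynamic table.
}
```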

Request Prioritization with h2

To accelerate the load time of a page, all modern browsers prioritize requests based on the type of asset, its location on the page, and even learned priority from previous visits.
HTTP/2 does not specify any particular algorithm for dealing with priorities; it only provides the mechanism by which priority data can be exchanged between client and server (PRIORITY frames carrying stream dependencies and weights). At the time of writing, none of the major web servers acts on this information.
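For illustration only, this sketch uses the Framer from golang.org/x/net/http2 to emit the raw PRIORITY frame a client could send; the stream IDs and weight are made up, and as noted above a server is free to ignore them:

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	framer := http2.NewFramer(&buf, nil) // write-only framer

	// Declare that stream 5 (say, an image) depends on stream 3
	// (say, the page's CSS) with a low weight: "send the CSS first".
	err := framer.WritePriority(5, http2.PriorityParam{
		StreamDep: 3,
		Exclusive: false,
		Weight:    15, // the field runs 0-255, representing weights 1-256
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("PRIORITY frame on the wire: % x\n", buf.Bytes())
}
```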

Server Push

A powerful new feature of HTTP/2 is the ability of the server to send multiple responses for a single client request. That is, in addition to the response for the original request, the server can push additional resources to the client without the client having to explicitly request each one.
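In Go's standard library this surfaces as the http.Pusher interface (Go 1.8+). A minimal sketch, with placeholder asset paths and certificate files:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprint(w, "body { margin: 0; }")
	})

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Over HTTP/2 the ResponseWriter also implements http.Pusher;
		// over HTTP/1.x the type assertion fails and we simply skip the push.
		if pusher, ok := w.(http.Pusher); ok {
			// Push the stylesheet before the client asks for it.
			if err := pusher.Push("/app.css", nil); err != nil {
				log.Println("push failed:", err)
			}
		}
		fmt.Fprintln(w, `<link rel="stylesheet" href="/app.css">`)
	})

	// cert.pem and key.pem are placeholder certificate files;
	// server push requires an HTTP/2 (TLS) connection.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```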

Friday, July 7, 2017

Azure Service Fabric 101: Why Service Fabric for Microservices

Service Fabric is a distributed systems platform used to build hyperscalable, reliable, and easily managed applications for the cloud.



Here I will outline, at a high level, the advantages of using Service Fabric to develop and manage your microservices.

Highly scalable

Service Fabric supports auto-scaling based on CPU consumption, memory, and more.
It maximizes resource utilization through load balancing, partitioning, and replication across all nodes in a cluster.

Partition support

Service Fabric supports partitioning of microservices. Partitioning is the concept of dividing data and compute into smaller units to improve the scalability and performance of a service (sketched below, after the link).

More info here 
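As a language-neutral illustration of the idea (this is not the Service Fabric API, which lives in the .NET SDK), a hash of a partition key decides which partition, and therefore which set of nodes, owns a piece of data:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor hashes a key and maps it to one of N partitions, so data
// and compute for different key ranges can live on different nodes.
func partitionFor(key string, partitionCount uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() % partitionCount
}

func main() {
	const partitions = 4 // hypothetical partition count
	for _, user := range []string{"alice", "bob", "carol", "dave"} {
		fmt.Printf("%s -> partition %d\n", user, partitionFor(user, partitions))
	}
}
```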

Rolling updates 

To achieve high availability and low downtime during upgrades, Service Fabric supports rolling updates, meaning the upgrade is performed in stages. Update domains divide the nodes in a cluster into logical groups, which are updated one at a time, as sketched below.
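Conceptually (this is a sketch of the idea, not Service Fabric internals; node names and the health check are hypothetical), the upgrade loop walks the update domains one at a time and only proceeds while the previously updated domain stays healthy:

```go
package main

import "fmt"

func main() {
	// Hypothetical layout: nodes grouped into update domains.
	updateDomains := map[string][]string{
		"UD0": {"node0", "node3"},
		"UD1": {"node1", "node4"},
		"UD2": {"node2", "node5"},
	}

	for _, ud := range []string{"UD0", "UD1", "UD2"} {
		fmt.Println("upgrading", ud)
		for _, node := range updateDomains[ud] {
			upgrade(node)
		}
		// Only the current domain is down at any moment; the rest of
		// the cluster keeps serving traffic.
		if !healthy(updateDomains[ud]) {
			fmt.Println("health check failed; halting (or rolling back)")
			return
		}
	}
	fmt.Println("rolling upgrade complete")
}

func upgrade(node string) { fmt.Println("  upgraded", node) }

// healthy is a stand-in for a real post-upgrade health check.
func healthy(nodes []string) bool { return true }
```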

State Redundancy 

Service Fabric natively integrates with a Microsoft technology called Reliable Collections to achieve collocation of compute and state for services deployed on it.
More about Reliable Collection here.

High-density deployment
Every microservice hosted on Service Fabric is logically isolated and can be managed without impacting other services. This level of granularity in turn makes a much higher deployment density achievable.
Another notable advantage of using Service Fabric is the fact that it is tried and tested. Microsoft runs services such as Azure DocumentDB, Cortana, and many core Azure services on Service Fabric.


Automatic fault tolerance
The cluster manager of Service Fabric ensures failover and resource balancing in case of a hardware failure. This ensures high availability of the services while minimizing manual management and operational overhead.

Heterogeneous hosting platforms
A key advantage of Service Fabric is its ability to manage clusters in and across heterogeneous environments. Service Fabric clusters can run on Azure, AWS, Google Cloud Platform, an on-premises data center, or any other third-party data center. Service Fabric can also manage clusters spread across multiple data centers. This feature is critical for services requiring high availability.

Technology agnostic
Service Fabric can be considered a universal deployment environment. Services or applications based on any programming language, or even database runtimes such as MongoDB, can be deployed on Service Fabric.
Service Fabric supports four types of programming models – Reliable Services, Reliable Actors, Guest Executable, and Guest Containers.
I will come back to this in more detail later.

Centralized management
Monitoring, diagnosing, and troubleshooting are three key responsibilities of the operations team. Services hosted on Service Fabric can be centrally managed, monitored, and diagnosed outside application boundaries. While monitoring and diagnostics are most important in a production environment, adopting similar tools and processes in development and test environments makes the system more deterministic. The Service Fabric SDK natively supports diagnostics capabilities that work seamlessly on both local development setups and production clusters.
More on this later.

And more
Service Fabric also provides service discovery and orchestration, addresses the circuit-breaking problem, and much more.