The Future Web?

What will the future web programming architecture look like?

Let us look at the current trend first.

  Most current websites operate on a client-server architecture, in which the client (e.g. a browser) sends requests to the server and the server responds with HTML. Nowadays most websites use REST calls to fetch data from the server, and both sides have their own frameworks (e.g. client side: AngularJS, Knockout.js; server side: JEE for Java, Ruby on Rails, CakePHP for PHP). Separating the application into pieces like this utilizes the server and the clients properly, meaning no single side carries too much load (a good balance).
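As a tiny sketch of that REST pattern, the browser client asks the server for data instead of a whole HTML page (the endpoint here is just a made-up example):

  // Client side: ask the server for data, not HTML.
  fetch('/api/users/42')
    .then(function (response) { return response.json(); })
    .then(function (user) { console.log(user.name); });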

  Current technologies are good enough in terms of performance, scalability, etc. But we still need to invest more in hardware to handle requests, and on the client side there is room to improve: we may reduce the server's work and hand it to the client, since each user brings their own client, whereas (in the simple case) there is only one server serving every request.

  Node.js (JavaScript that can now write server code) is a booming technology which may open up more possibilities for the future.
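For example, here is a minimal Node.js server, just to show what "JavaScript on the server" looks like:

  // A complete Node.js server: JavaScript answering HTTP requests.
  var http = require('http');

  http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'hello from server-side JavaScript' }));
  }).listen(3000);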


What will be the future?

  Well, it is good to keep improving client-side frameworks. I am thinking of having a middle component between server and client. Is that possible? If so, how can we design it? Let's see.

  The middle component can live on the client side and take over most of the server's work, allowing a request to hit the server only for persistence-related work, while the middle component looks after sessions, business logic and the rest. The challenging task here is hiding the middle component's source from the user, so we should find a way to do that. Node.js can help achieve it.
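A rough sketch of the idea (all names here are hypothetical, and the source-hiding problem is left open): the middle component owns the session and the business rules, and only persistence crosses the wire.

  // Runs on the client. Only placeOrder's fetch ever reaches the server.
  var middle = {
    session: { user: null },

    login: function (user) {
      this.session.user = user;  // session handling stays client-side
    },

    placeOrder: function (order) {
      // Business rules are enforced here, not on the server.
      if (!this.session.user) throw new Error('not logged in');
      if (order.quantity <= 0) throw new Error('invalid quantity');

      // Only persistence hits the server.
      return fetch('/persist/orders', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(order)
      });
    }
  };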

So when we achieve the above architecture, the server load will be reduced tremendously, and there will be no need to upgrade the server hardware, since the users are going to contribute more of the memory.. :-)

Let's look forward to a (Client - Middle component) - Server architecture, maybe. :-)


What will our future big data look like? How will we store data in the future?

Big data is largely the successor of traditional databases: a collection of tools for handling massive volumes of data, and today's high-traffic sites have already implemented it in their back ends. I am wondering how we will store the terabytes upon terabytes of data of the future. We can already see big applications dumping terabytes of data each day. Where do we store all this data if it keeps growing? A lot of money has to be invested in hardware, and proper space has to be allocated to house that hardware. I am thinking of some way to reduce that cost in the future.

 How can we do this? Yes, you may think of the term compressing (I don't want to bold it, since that might tempt your eyes to skip straight to this line ;-) ) the data to reduce its size. Yes, it is already used today, but how do we compress a gigabyte of data down to megabytes, or kilobytes, or even bytes (I am crazy)? Is that possible? If we make it possible, here is one way to do it.

Current compression algorithms give you some flexibility to shave off perhaps 100MB or even 200MB of a gigabyte of data; we need to get it down to 100MB or even lower.
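To see where we stand today, Node's built-in zlib shows what a general-purpose algorithm can do (the ratio depends entirely on the data; video is usually already compressed, so it barely shrinks):

  // Compress a 1MB sample with gzip and compare sizes.
  var zlib = require('zlib');

  var input = Buffer.alloc(1024 * 1024, 'some repetitive sample data ');
  var compressed = zlib.gzipSync(input);
  console.log(input.length + ' -> ' + compressed.length + ' bytes');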

I am thinking of a way to achieve it.

Just an illustration:

Think of uploading and sharing your birthday party video (feel free to read "birthday party" as whatever you would like to upload ;-) ).

Upload a file:
  1. Upload the 100MB video to an application.
  2. Compress it to, say, 10MB with the future algorithm.
  3. Save it to some storage.
Download a file: yes, it should be the reverse of the upload (a sketch of both flows follows the steps below):

  1. Decompress it back to the 100MB video.
  2. Download the 100MB file.
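A minimal sketch of that server-side flow, using Node's zlib as a stand-in for the imagined future algorithm (the storage path is hypothetical, and gzip will not shrink real video tenfold):

  var zlib = require('zlib');
  var fs = require('fs');

  function handleUpload(videoBuffer) {
    // Upload steps 2-3: compress the bytes, save the smaller file.
    var compressed = zlib.gzipSync(videoBuffer);
    fs.writeFileSync('/storage/party-video.gz', compressed);
  }

  function handleDownload() {
    // Download steps 1-2: decompress back to the original bytes to serve.
    var compressed = fs.readFileSync('/storage/party-video.gz');
    return zlib.gunzipSync(compressed);
  }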
Or even better:

How about if clients (in client-server terminology) are given the additional work of decompressing? (A sketch follows the steps.)
  1. Download the 10MB file. (Sounds awesome, doesn't it? You will save data for more movies and music :-))
  2. Decompress it into the 100MB file.
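The browser can already do this for gzip via the standard DecompressionStream API; a sketch (the URL is hypothetical):

  // Download the small file, decompress it locally in the browser.
  async function downloadAndDecompress() {
    var response = await fetch('/files/party-video.gz');           // step 1
    var stream = response.body.pipeThrough(new DecompressionStream('gzip'));
    return await new Response(stream).blob();                      // step 2
  }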


Yes, of course there will be security problems, and we should fix them.