Today I don’t have any work to do at the office, so I am just sitting in my chair with a cup of tea, a PC in front of me, and my laptop. Before I start talking about Bitcoin, I should mention that it is one of my favourite topics in technology, and I am writing this article with Mr. Satoshi Nakamoto’s original paper in mind.

Link :-
So what is Bitcoin?


Bitcoin, often described as a cryptocurrency, a virtual currency or a digital currency, is a type of money that is completely virtual. It’s like an online version of cash. You can use it to buy products and services, but not many shops accept it yet. Bitcoin was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middlemen – meaning, no banks! Bitcoin can be used to book hotels on Expedia, shop for furniture on Overstock and buy Xbox games. But much of the hype is about getting rich by trading it. The price of bitcoin skyrocketed into the thousands in 2017.

Bitcoin nowadays is not only a cryptocurrency or a digital payment system. Thanks to its unique features, Bitcoin has become a real instrument for investment, saving and even earning more money. Bitcoin is a consensus network that enables a new payment system and a completely digital money. It is the first decentralized peer-to-peer payment network that is powered by its users, with no central authority or middlemen. From a user perspective, Bitcoin is pretty much like cash for the Internet.

Well, how does Bitcoin actually work?

From a user perspective, Bitcoin is nothing more than a mobile app or computer program that provides a personal Bitcoin wallet and enables a user to send and receive bitcoins. 

Behind the scenes, the Bitcoin network is sharing a massive public ledger called the “block chain”. This ledger contains every transaction ever processed which enables a user’s computer to verify the validity of each transaction. The authenticity of each transaction is protected by digital signatures corresponding to the sending addresses therefore allowing all users to have full control over sending bitcoins.


Satoshi Nakamoto defines an electronic coin as a chain of digital signatures. The solution he proposes begins with a timestamp server. A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post [2-5]. The timestamp proves that the data must have existed at the time, obviously, in order to get into the hash. Each timestamp includes the previous timestamp in its hash, forming a chain, with each additional timestamp reinforcing the ones before it.
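The chained-timestamp idea can be sketched in a few lines of Python. This is my own toy illustration of the concept, not code from the paper; the function and field names are made up:

```python
import hashlib
import json

def timestamp_block(items, prev_hash, ts):
    """Hash a block of items together with the previous block's hash and a
    timestamp, so each new timestamp reinforces the ones before it."""
    payload = json.dumps({"items": items, "prev": prev_hash, "time": ts},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a small chain: each block commits to the hash of the previous one.
h0 = timestamp_block(["genesis"], prev_hash="0" * 64, ts=1)
h1 = timestamp_block(["tx-a", "tx-b"], prev_hash=h0, ts=2)
h2 = timestamp_block(["tx-c"], prev_hash=h1, ts=3)

# Tampering with block 0 changes h0, which in turn changes every later hash.
tampered_h0 = timestamp_block(["forged"], prev_hash="0" * 64, ts=1)
tampered_h1 = timestamp_block(["tx-a", "tx-b"], prev_hash=tampered_h0, ts=2)
assert tampered_h1 != h1  # the forgery is detectable downstream
```

This is why rewriting history gets harder the deeper a block is buried: every later hash would have to be recomputed.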

Source :- Satoshi Nakamoto’s original paper 

New transaction broadcasts do not necessarily need to reach all nodes. As long as they reach many nodes, they will get into a block before long. Block broadcasts are also tolerant of dropped messages. If a node does not receive a block, it will request it when it receives the next block and realizes it missed one. 


By convention, the first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. This adds an incentive for nodes to support the network, and provides a way to initially distribute coins into circulation, since there is no central authority to issue them. The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU time and electricity that are expended.
The incentive can also be funded with transaction fees. If the output value of a transaction is less than its input value, the difference is a transaction fee that is added to the incentive value of the block containing the transaction. Once a predetermined number of coins have entered circulation, the incentive can transition entirely to transaction fees and be completely inflation free. 
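A quick sketch of that fee arithmetic in Python. The amounts here are made up for illustration (real Bitcoin software counts in integer satoshis, not floats):

```python
# A toy transaction: inputs are coins being spent, outputs are coins created.
tx = {
    "inputs":  [0.50, 0.30],   # BTC claimed from earlier transactions
    "outputs": [0.45, 0.34],   # BTC sent to recipients (including change)
}

def transaction_fee(tx):
    """Fee = total input value minus total output value; the difference
    goes to the miner of the block containing this transaction."""
    return round(sum(tx["inputs"]) - sum(tx["outputs"]), 8)

fee = transaction_fee(tx)  # 0.01 BTC left over for the miner
```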


The traditional banking model achieves a level of privacy by limiting access to information to the parties involved and the trusted third party. The necessity to announce all transactions publicly precludes this method, but privacy can still be maintained by breaking the flow of information in another place: by keeping public keys anonymous. The public can see that someone is sending an amount to someone else, but without information linking the transaction to anyone. This is similar to the level of information released by stock exchanges, where the time and size of individual trades, the “tape”, is made public, but without telling who the parties were.
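A toy Python illustration of this privacy model. The key, addresses and amounts below are made-up placeholders, and real Bitcoin addresses are derived differently; the point is only that the public ledger links amounts to opaque addresses, not to people:

```python
import hashlib

# Off-chain: only Alice knows which address is hers.
alice_pubkey = "alice-public-key-bytes"          # illustrative placeholder
alice_address = hashlib.sha256(alice_pubkey.encode()).hexdigest()[:16]

# On-chain: the public ledger records amounts between opaque addresses.
ledger = [
    {"from": alice_address, "to": "opaque-address-2", "amount": 0.5},
]

# Anyone can verify the flow of coins, but nothing here links
# alice_address back to Alice unless she reveals the key herself.
```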

See an example diagram below:-

Source:- Satoshi Nakamoto’s original paper 


  1. October 31, 2008: Bitcoin whitepaper published.
  2. January 3, 2009: The Genesis Block is mined.
  3. January 12, 2009: The first Bitcoin transaction.
  4. December 16, 2009: Version 0.2 is released.
  5. November 6, 2010: Market cap exceeds $1 million USD.
  6. October 2011: Bitcoin forks for the first time to create Litecoin.
  7. June 3, 2012: Block 181919 created with 1322 transactions, the largest block to date.
  8. June 2012: Coinbase launches.
  9. September 27, 2012: Bitcoin Foundation is formed.
  10. February 7, 2014: Mt. Gox hack.
  11. June 2015: BitLicense gets established. This is one of the most significant cryptocurrency regulations.
  12. August 1, 2017: Bitcoin forks again to form Bitcoin Cash.
  13. August 23, 2017: SegWit gets activated.
  14. September 2017: China bans BTC trading.
  15. December 2017: First bitcoin futures contracts were launched by CBOE Global Markets (CBOE) and the Chicago Mercantile Exchange (CME).
  16. September 2018: Cryptocurrencies collapsed 80% from their peak in January 2018, making the 2018 cryptocurrency crash worse than the Dot-com bubble’s 78% collapse.
  17. November 15, 2018: Bitcoin’s market cap fell below $100 billion for the first time since October 2017.
  18. October 31, 2019: 11th anniversary of Bitcoin.


  • Irreversible :- After confirmation, a transaction can’t be reversed. 
  • Secure :- Bitcoin funds are locked in a public-key cryptography system. 
  • Pseudonymous :- Neither transactions nor accounts are connected to real-world identities.
  • Fast and global :- Transactions are propagated nearly instantly in the network and are confirmed in a couple of minutes.


To sum up, we have seen what Bitcoin is, how it can benefit us, how it handles user privacy, and some key highlights.


Participating in open source project contribution

An open source project is one whose source code is publicly available, so anyone can view it, modify it and contribute to it.

Contributing to open source has become a trend, and developers building software intended for sharing, collaboration and redistribution can use the trademark if the distribution terms of the software fit within the OSI’s definition of open source.

The distribution terms are a set of principles to follow, similar to a code of ethics, which can be found here

What are the benefits of contributing to open source projects?

First things first: contributing to open source projects won’t earn you money except in very few cases, like Google’s Summer of Code program for students. You might wonder, then, why anyone should bother spending time working on an open source project if there’s no monetary benefit. Well, there are lots of benefits to contributing to open source. Mostly it helps you grow as an individual: you learn new technologies or specialize in a particular technology, build a reputation among other contributors, and stay aware of trending technologies. So here we are going to talk about the benefits of contributing to open source, and they are as follows:-

  • It helps you learn :- You have heard this before: there is no learning experience better than working on an actual project yourself. If you are working on an open source project with a thriving community, you’ll get tons of feedback on your work and learn to adapt very quickly, getting better with every contribution.
  • Work with great people :- You’ll get a chance to work with some of the most experienced developers, programmers and designers from around the world while working on open source projects.
  • CV Building :- Contributing to open source projects is a great mark to have on your CV. If you are graduating in Computer Science, having a degree just isn’t enough and your experience as a contributor to open source projects can be the one thing that catches an employer’s eye.

Here are some of the paid open source programs held for university students

  1. Google Summer of Code :- According to Google, Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3-month programming project during their break from colleges/universities. Link:-
  2. Outreachy :- According to Wikipedia, Outreachy (previously the Free and Open Source Software Outreach Program for Women) is a program that organizes three-month paid internships with free and open-source software projects. The goal of Outreachy is to “create a positive feedback loop” that supports more women participating in free and open-source software. Link:-
  3. Rails Girls Summer of Code :- Another global fellowship program for women and non-binary coders. Students receive a three-month scholarship to work on existing Open Source projects and expand their skill set. It is a not-for-profit organisation that operates solely on the generous donations of sponsors and individuals from the community.      Link:-
  4. Google Code-In (Pre-University) :- An annual programming competition hosted by Google LLC that allows pre-university students to complete tasks specified by various, partnering open source organizations. Link:-

Various other global open source programs are held on an annual basis, such as Summer of Haskell, Mozilla’s Winter of Security, KDE Summer of Code (an alternative to GSoC), the Free Software Foundation’s programs, and many more.


Writing a Resume

Well, many of us face this challenge before applying to any company. Here we discuss the worst mistakes we should avoid while writing a resume for a particular company. Some of them are as follows.

  • Typos and grammatical errors :- This is a basic mistake we make while creating a resume, and it is obvious that we should avoid it. A resume with grammatical errors suggests that the candidate is careless and does not even care about the job, and it leads the recruiter to not-so-flattering conclusions about you, like “This person can’t write” or “This person obviously doesn’t care.” This is an easy problem to fix: just make sure you run a spell check and have at least one other person read your resume before you send it in. Go slowly when putting your materials together. Putting together a resume can be a difficult task, but taking your time to think about what to include and how to avoid common problems can help you land that interview.
  • Lack of specifics :- This is the part where the employer needs to understand what you’ve done and accomplished, so be careful with this section. Creating a resume can seem complicated, especially when the skills you have taught yourself do not fit the requirements of the job you are applying for.
  • Attempting the “one–page–fits–all” approach :- It is good to try to describe yourself in one page, but different hiring managers look for different resume lengths. Employers want to feel special and want you to write a resume specifically for them. They expect you to clearly show how and why you fit the position in a specific organization.
  • Listing responsibilities instead of accomplishments :- A resume should be accomplishment-oriented, not responsibility-driven. The biggest mistake a jobseeker can make when building a resume is to include a long list of the responsibilities associated with their past roles without context. A list of responsibilities doesn’t grab anybody’s attention.
  • Going on too long or cutting things too short :- Many people try to squeeze their experiences onto one page, because they’ve heard resumes shouldn’t be longer. By doing so, job seekers may delete impressive achievements. Other candidates ramble on about irrelevant or redundant experiences.
  • Bad objective :- Most resumes lose (or grab) the reader’s attention in the first line of the first paragraph. Remember, the objective is the most important part of your resume; it lays the basis of the reader’s attention and impression of you.




Docker

Docker is software designed to make it easier to create, deploy and run applications using containers. In other words, Docker is an open source containerization platform that packages your application and all its dependencies together in the form of a container, to ensure that your application works seamlessly in any environment. Well, now we know what Docker is, but what about containers? What are they?

Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. With Docker, you can treat containers like extremely lightweight, modular virtual machines, and you get flexibility with those containers – you can create, deploy, copy and move them from environment to environment, which helps optimize your application for the cloud.


Now another question arises: “In what cases should we use Docker?” A Docker container packages the libraries and other dependencies needed to provide a particular piece of functionality. To understand this better, let’s consider an example.

Example :- A company needs to develop a Python application. To do so, a developer sets up an environment with the Django server installed in it. Once the application is developed, it needs to be tested by the tester. Now the tester will again set up the Django environment from scratch to test the application. Once the application testing is done, it will be deployed on the production server. Again, production needs an environment with Django installed on it, so that it can host the Python application. So the same Django environment setup is done three times. This wastes time and effort, and there is a high chance of version mismatch between the different setups, i.e. the developer and tester may have installed an old version of Django, while the system admin installed the newest version on the production server.

Now try to deploy the same application using Docker. In this case, the developer will create a Django Docker image (a Docker image is nothing but a blueprint for deploying multiple containers with the same configuration) using a base image like Ubuntu, which already exists on Docker Hub (Docker Hub hosts a number of base images). Now this image can be used by the developer, the tester and the admin to deploy the Django environment. This is how a Docker container solves the problem simply.
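To make the example concrete, here is a minimal sketch of what such a shared image definition (a Dockerfile) might look like. The base image tag, Django version and file layout below are illustrative assumptions, not taken from any official image:

```dockerfile
# Start from an existing base image on Docker Hub.
FROM ubuntu:18.04

# Install Python and a pinned Django version once, in the image, so the
# developer, tester and admin all get exactly the same environment.
RUN apt-get update && apt-get install -y python3 python3-pip \
    && pip3 install Django==2.2

# Copy the application code into the image.
COPY . /app
WORKDIR /app

# Run the Django development server (illustrative; production would
# typically put a WSGI server here instead).
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
```

Building this once (`docker build`) and sharing the resulting image replaces the three separate environment setups from the example above.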

Now we know what Docker is and what a Docker container is, and we have seen a practical example. It’s time to look at the advantages of Docker.


Following are some advantages of Docker; let’s discuss them in detail.

  1. Rapid development :- Docker can decrease deployment time because it creates a container for every process, and containers can be created and destroyed quickly without worrying about the cost of bringing them up again. With Docker, we can build a container image and use that same image at every step of the deployment process. This makes it possible to separate non-dependent steps and run them in parallel, so the time from build to production can speed up notably.
  2. Security :- Docker makes sure that applications that are running on containers are completely set apart from each other, from a security point of view, by granting us complete control over traffic flow and management.
  3. Simplicity & faster configuration :- The way Docker simplifies matters is one of its key benefits. It gives users the flexibility to take their own configuration, put it into code, and deploy it without any problems. Moreover, infrastructure requirements are no longer tied to the environment of the application, since Docker can be used in a wide variety of environments.


  1. Applications do not run at bare-metal speed :- an application inside a container cannot quite match the performance of the same application on a bare-metal server.
  2. No cross-platform compatibility :- Docker does not provide cross-platform support; for example, a container built on Windows Docker cannot run on Linux Docker.
  3. No graphical interfaces by default :- out of the box, Docker does not provide GUI support for applications.

Those are the major limitations I see while using Docker.


In the end, we have seen what Docker is and how it is useful for us, with a practical example, and we have also looked at its major pros and cons.


Houdini Software

Hello guys, I am Manshu Sharma, and today I am going to introduce a (not so new 🙂 ) VFX software that totally amazed me when I tried it: Houdini, a 3D animation and special effects program created by Side Effects Software, a company located in Toronto, Canada. The software is used by artists working in 3D animation and VFX for web, film, video games, etc. It is one powerful program that blends different worlds into a single platform. Here we are going to briefly discuss Side Effects’ Houdini: why should we use it, and what exactly does Houdini do? But before that, let’s discuss the history of Houdini.


Houdini is produced by SideFX, sometimes written as Side Effects Software, which is based in Toronto, Canada. The company was started in 1987 with the aim of bringing 3D graphics to people working in the field. Their first software was called PRISMS, a 3D graphics program based on procedural generation. Later, another program called Houdini, inspired by PRISMS, was released in 1996 with additional improvements over PRISMS, and it has been updated regularly ever since.


You can check more details about Houdini’s versions, history, etc. on its Wikipedia page. Link:-

What Exactly Does Houdini Do?

It uses a node-based workflow that helps users trace errors as they modify and refine their work. Assets in Houdini are generally created by connecting a series of nodes. The advantage of this workflow is that it allows artists to create detailed objects in a relatively short number of steps compared to other programs.

Maya, on the other hand, is popular 3D software, but in Maya it is very difficult to return to a previous version of the work. Houdini, in contrast, allows multiple redos and undos, which helps in making efficient changes and developing animations easily. Artists use Houdini primarily for dynamic environments and particle effects, but Houdini is also loaded with tools that allow users to model, animate and render files as required. It also provides standard geometric modelling tools and keyframe animation.

Houdini is best known for its advanced dynamic simulation tools, which allow for the creation of highly realistic visual effects. New efficiencies in the software mean that artists can achieve state-of-the-art effects on less advanced hardware. Houdini also has a node-based lighting system that provides a flexible work environment for building shaders and creating computer graphics effects. It features a powerful volumetric system for creating smoke and fire simulations, as well as a compositor for layered image effects.

Why Should we Use Houdini?

Houdini is particularly suited to visual effects artists with a technical background. It provides all the editing tools expected in 3D software but is most known for its VFX tools and the node-based procedural nature of its workflow.

It also provides a user interface that makes it easy to code, manage and distribute work. More and more VFX studios are transitioning to SideFX’s Houdini in their post-production pipelines because of its versatility, power, and impressive level of quality.

 A major benefit of Houdini is the built-in procedural generation for VFX. Everything from destructions and deformations, to hair, ice crystals, waves, bubbles, fire, and more – can all be procedurally generated inside of Houdini.  


As we know, Houdini was initially released in 1996; its current version is Houdini 17.5, released on 23 March 2019 for Windows, Linux and Mac OS X.


I already mentioned some benefits under “Why Should we Use Houdini?”, but it would be cheating you guys if I did not write down the full list of benefits. So let’s get started.

  1. Localised workflow :- Houdini features 3D modelling, animation, rigging, destruction, particle and fluid simulation, procedural generation, and much more – all in one place. 
  2. Ease of collaboration :- Having a localised workflow within a single program really speeds up collaboration between VFX teams. 
  3. Procedural programming :- “Procedural” refers to creating a system that then creates more elements, rather than creating each element by hand. This comes with a few benefits: 
    1. Unlimited redo and undo is possible, which is hard in Maya if you want to return to a previous version of a project.
    2. You can create a reusable motion graphics system using a robust tool set.
  4. Fully integrated :- Unlike other software, Houdini does not require any additional plugins to create the effects that you desire. The core functionality of Houdini allows you to create explosions, smoke and fire, realistic fluid movement, or crowd layouts.
  5. Render engine :- Houdini comes with its own render engine, called Mantra, which has two operating modes: 
    1. Micropolygon rendering :- recommended for scenes that involve fur, smoke, or sprites. You can also use it to match your existing footage.
    2. PBR, or Physically Based Rendering :- recommended for scenes that involve real-world light, shadows, reflections, secondary bounces, etc. You do not need to write your own shaders.
  6. Create and manage complex systems :- Thanks to its node-based interface, Houdini lets you create complex systems. First of all, you do not need to create plugins anymore. Secondly, you can wire the output of one node into several other nodes to easily create complex systems.
  7. Attributes :- Houdini also excels at how data is stored and processed through the software. Data is stored in attributes, which you can then manage and manipulate to create any desired effect. 
  8. Node-based interface :- Houdini uses a node-based interface for creating amazing and efficient motion graphics procedures. Compared to layer-based systems, Houdini lets you position various elements as you see fit without changing all the elements for every layer.
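To illustrate the node-wiring idea in the abstract, here is a generic Python sketch. This is not Houdini’s actual API, just the concept of wiring one node’s output into several downstream nodes:

```python
class Node:
    """A minimal node: computes a value from its upstream inputs."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def evaluate(self):
        # Pull values from upstream nodes, then apply this node's function.
        return self.fn(*(n.evaluate() for n in self.inputs))

# One source node wired into several downstream nodes, as in a node graph:
source  = Node(lambda: 10)
double  = Node(lambda x: x * 2, source)   # reuses source's output
square  = Node(lambda x: x * x, source)   # reuses it again
combine = Node(lambda a, b: a + b, double, square)

result = combine.evaluate()  # 10*2 + 10*10 = 120
```

Changing the source node automatically changes everything downstream on the next evaluation, which is the procedural advantage the list above describes.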

Installation on different platforms

You can find installation instructions for whatever platform you like at the link below


Continue Reading