"I'm not scared of a computer passing the Turing test. I'm terrified of one that intentionally fails it." There is no doubt that AI is currently one of the hottest trends in the tech world. Think of Facebook's face recognition engine, driverless cars, and autopilots, all with AI behind them. Artificial intelligence (AI) has astounding potential to accelerate scientific discovery in defence, security, humanitarian aid, biology, and medicine, and to transform health care. AI represents some of the foremost breakthroughs of the modern age.

However, advances in AI can easily pose a threat to human dignity. Technology has reached a point at which the deployment of Lethal Autonomous Weapons Systems (LAWS) is, practically if not legally, feasible within years. LAWS have been described as the third revolution in warfare, after gunpowder and nuclear arms. They could violate fundamental principles of human dignity by allowing machines to choose whom to kill. Think, too, of machines replacing humans in industry, and the amount of human potential left unused as a result. What about AI replacing humans in positions that require respect and care, such as a customer service representative, a therapist, a nursemaid for the elderly, a soldier, a judge, or a police officer?

If an AI program (such as one for speech recognition) exists that can understand natural language and speech, then, with adequate processing power, it could theoretically listen to every phone conversation and read every email in the world, understand them, and report back to the program's operators exactly what is said and exactly who is saying it, posing a serious threat to personal privacy. Since AI will have such a profound effect on humanity, AI developers are representatives of future humanity, and thus have an ethical obligation to be transparent in their efforts for the safety of the generations to come.
Doesn't OpenAI, while providing safety from rogue researchers, expose us to the harm of the research falling into nefarious hands? And what about robot rights: the moral obligations of society towards its machines, analogous to human rights or animal rights? If a machine is capable of intelligence on par with that of a human being, doesn't that entitle the machine to the right to life and liberty, freedom of thought and expression, and equality before the law?