First mover (dis)advantage in industry and academia

A month or so ago, Google issued a press release to announce their latest result in quantum computing using superconducting qubits. The full technical paper is up on Physical Review X and arXiv, with an easier-to-understand summary on their research blog. Shortly after that, my friend and former classmate Shantanu Debnath was part of a new breakthrough result from the University of Maryland: a programmable trapped-ion quantum computer. I found it interesting that the latest advancements in QC/QI are coming from both industry and academia.

I went into academia with a very staunch belief that research needed to be unbiased and unfettered by monetary constraints. Of course everything still costs money, but academia does create an approximation of the ideal. The absence of direct monetary incentives, along with tenure for professors, encourages curiosity-driven research and risk-taking. There is definitely some element of ‘selling’ your research, which many researchers might find a bit distasteful. However, the ‘buyers’ are often other researchers: peer reviewers, funding-agency program managers and journal editors. Even the worst of this bunch belongs in the top 0.01% of the world in their respective fields.

Having jumped ship to industry now, I am trying to observe how research in industry works. At first glance, it seems that all of the above conditions are violated. Research in industry is almost always motivated by monetary gain, either for a company or an individual. Researchers also usually do not have the security of tenure; their positions are under much greater threat from market forces, management decisions and economic factors beyond their control. Finally, even though they might publish their work in peer-reviewed journals, the measure of success for these researchers is ultimately tied to the success of the company. In that sense, the true ‘buyer’ of their work is not a specialized group but the general public. ‘Selling’ research to the general public is far more difficult and requires vastly more packaging.

The interplay of all these forces makes for a fascinating arc through history. Almost always, academic research spends at least 10-30 years pursuing a given direction before anyone in industry gets wind of it. First mover advantage is definitely huge in academia. The most agile research groups, the ones able to invest in a new technology early, can reap the rewards for decades to come. The most relevant example is NIST Maryland, which pioneered laser cooling technology back in the 1990s. Today, the collaboration between NIST and the University of Maryland has some of the best technology in laser cooling and optical traps, which still makes them leaders in related fields decades later.

Once the academics have chipped away at a difficult and risky problem and brought it within striking distance, a few bold souls from the private sector will attempt to seize the first mover advantage. Strong monetary and competitive incentives then push industrial research forward rapidly over the next few years. Yet, as I learn more about the history of innovation in the Bay Area, it seems that when it comes to world-changing technologies, being the first mover in the private sector is not necessarily an advantage.

GO Corporation, General Magic and, to some extent, Fairchild Semiconductor are examples of companies that could not last long enough to see the fruits of the technologies they themselves pioneered. In each case, companies that came along 10-20 years later perfected the product when the technology was more mature and the general public more receptive. Intel, born out of a dying Fairchild, went on to become one of the most dominant corporations ever. Apple used GO and General Magic’s vision, and some of the founders, to create the iPhone, probably the single most successful product in history.

I think being too early is not as disastrous in academia. For one, academic research does not have to turn a profit year after year to sustain itself. Secondly, since the adjudicators of success are often other scientists, it is possible to convince them of the value of the work even if experimental confirmation and technological validation are many years in the future *cough* neural networks, string theory *cough*.

But being too early is death in the marketplace. Tony Fadell (the man behind the iPod and iPhone) has explained in many an interview how Apple practically sculpted the market by first selling the iPod and then adding one feature at a time (video, podcasts, iTunes) until the market was ripe for the iPhone. The general public is not much into beta testing new technologies, so the first few iterations of a consumer technology are often destined to fail.

It is tempting to extrapolate this “first mover disadvantage” to today’s hubbub around quantum computing. The seeds of quantum computing were sown in the 1970s, and it has been a hotbed of academic research for the last couple of decades. In the last few years, QC has rocketed into mainstream news, with every major tech company angling to have a stake in the future.

Realistically, I think that a consumer product utilizing quantum computing is still some distance away. Of course, as with most research, unforeseen challenges and unexpected discoveries may drastically alter the timeline. But based just on my personal assessment of past developments, the estimated time-to-consumer for QC should be a minimum of 10 years, maybe 20. That also happens to be exactly the time-frame in which the pioneers of previous revolutions filed for bankruptcy so that the next generation of companies could stand on their shoulders and change the world. The bold claim of General Magic in 1993, to build an anytime, anywhere handheld communications device, seems like a close parallel to D-Wave announcing the world’s first commercial quantum computer in 2011.

So am I saying that all the companies that are around today investing heavily in quantum computing will be gone before you can solve the travelling salesman problem on your phone? 🙂

Google’s latest quantum computing results explored!

The D-Wave quantum computer has been a focal point of great controversy in the academic and research communities. Back when I was doing my PhD in Quantum Optics and Quantum Information, and was technically in the race to build a quantum computer, I was, I have to admit, mildly skeptical of D-Wave’s claim of having made the first commercial quantum computer.

This attitude was a result of two facts: first, for a long time D-Wave staunchly maintained that its research was proprietary and did not allow open peer review of its work. Second, many accomplished and reputable physicists have openly stated, both in the press and to me personally, that in their opinion the D-Wave is not really a quantum computer.

On Dec 8th, Google issued a big press release in collaboration with D-Wave. The bold claim is that the D-Wave ‘quantum annealer’ does in fact beat the equivalent classical algorithm by an astonishing factor of 10^8. The team was nice enough to release a paper on arXiv explaining the theory and experiment behind the statement. The paper has yet to be peer reviewed, so I decided to check it out myself.

The title of the paper is “What is the Computational Value of Finite Range Tunneling?”.

Quantum tunneling is a phenomenon that allows a system to cross an energy barrier without actually climbing over it, instead tunneling through it. The most common analogy is throwing a ball at a wall and having it come out on the other side. This phenomenon arises from the fact that, according to quantum physics, objects are not rigid entities but clouds of probability density. What we physically recognize as the ball is simply the region with the highest probability of finding the ball. However, there is a non-zero probability of finding the ball outside this region, say on the other side of the wall.

For large macroscopic objects like balls and walls, this probability is vanishingly small; we would never see it happen even if we tried for a billion billion years. However, at the microscopic scale these quantum effects can become quite substantial, and quantum tunneling has been observed for particles like electrons and protons.
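To put some numbers on “vanishingly small”, here is a rough back-of-the-envelope sketch (my own illustration, not from the paper), using the textbook WKB estimate in which the probability of tunneling through a barrier of height V and width d scales as exp(-2 * d * sqrt(2 * m * V) / hbar). The particular masses and barrier sizes below are made-up but representative:

```python
import math

HBAR = 1.054e-34  # reduced Planck constant, in J*s

def tunneling_exponent(mass_kg, barrier_j, width_m):
    """Return the (negative) exponent in the WKB tunneling probability estimate."""
    kappa = math.sqrt(2 * mass_kg * barrier_j) / HBAR
    return -2 * kappa * width_m

# An electron (9.1e-31 kg) hitting a 1 eV (1.6e-19 J), 0.1 nm barrier:
# the exponent is about -1, so the tunneling probability is of order e^-1,
# i.e. quite substantial.
electron = tunneling_exponent(9.1e-31, 1.6e-19, 1e-10)

# A 100 g ball hitting a 1 J, 10 cm 'wall': the exponent is about -8.5e32,
# a probability far too small to ever observe.
ball = tunneling_exponent(0.1, 1.0, 0.1)
```

The exponent, not the prefactor, is what matters here: shrinking the mass and the barrier by 30-odd orders of magnitude takes the probability from hopeless to commonplace.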

The paper, as the title states, aims to establish just how useful this quantum effect can be for computing.

And according to the paper, it is very useful when applied to the kind of optimization problems tackled by simulated annealing. Which raises the question,

What is simulated annealing?

Some computational problems can be posed as optimization problems, i.e. the parameters of the question are embedded into a mathematical function in such a way that the answer to the question is the minimum value of the function.

However, finding the minimum of an arbitrary function is not always easy, especially if the function has multiple local minima. In such cases it is sometimes preferable to find a good approximate answer rather than the exact answer, which might take a long time. This is where the concept of annealing comes into play.

The system is assigned a ‘temperature’, which governs the probability of the system jumping to another state. Then the system is allowed to ‘cool’, i.e. the probability of these jumps is slowly decreased over time. As a result, the system spends more time in states with a low ‘energy’, given by the optimization function. Therefore, as the system ‘cools’, it is more likely to end up in a state close to the global minimum than in any other.

The origin of this process comes from metallurgy where worked metals are heated and allowed to recrystallize slowly into a low energy, low defect state. Simulated Annealing (SA) is simply this process being simulated on a computer.

The probability that simulated annealing jumps out of a local minimum well falls as the temperature drops, and it also decreases exponentially with the height of the wall surrounding the well. Moreover, narrow wells have a smaller probability of the system landing in them. Therefore SA performs very well when the optimization function has many shallow, wide wells, but very poorly when the function has deep, narrow wells.
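To make this concrete, here is a minimal toy sketch of simulated annealing (my own illustration, not code from the paper), using the standard Metropolis acceptance rule: an uphill jump costing energy dE is accepted with probability exp(-dE/T), exactly the exponential penalty on barrier height described above. The double-well `energy` function and all parameters are invented for the example:

```python
import math
import random

def simulated_annealing(energy, x0, n_steps=20000, t_start=2.0, t_end=0.01,
                        step=0.5, seed=0):
    """Minimal Metropolis-style simulated annealing on a 1-D energy function."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for i in range(n_steps):
        # Geometric cooling schedule: 'temperature' decays from t_start to t_end.
        t = t_start * (t_end / t_start) ** (i / n_steps)
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        # Always accept downhill moves; accept uphill moves with probability
        # exp(-dE/T), which shrinks exponentially with the barrier height dE.
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# A double well: a shallow local minimum at x = -1 (energy 0.5) and the
# global minimum at x = +2 (energy 0), separated by a barrier.
def energy(x):
    return min(0.5 + (x + 1) ** 2, (x - 2) ** 2)

# Start in the wrong (shallow) well; annealing should still find the deep one.
x_best, e_best = simulated_annealing(energy, x0=-1.0)
```

While the temperature is high, the walker escapes the shallow well easily; as it cools, it settles into the global minimum. Cool too fast, or raise the barrier, and it gets stuck, which is exactly the failure mode that deep, narrow wells exploit.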

But guess what works really well if you want to jump through really tall walls? Quantum tunneling. Using a quantum system for annealing, or Quantum Annealing (QA), eases the computational cost of SA by adding an additional method by which our (quantum) ‘ball’ can get from the shallow well to the deeper well. The exponential nature of the cost means that even a small tunneling probability can mean a huge gain in computational power.

That covers the introductory sections of the paper. The bulk of the paper describes the specifics of the function and the quantum setup they used. The main points can be summarized thus:

  • The quantum system used in the D-Wave is a grid of qubits. A qubit (quantum bit) is a two-level quantum system that can be not only in state 0 or state 1, but also in a superposition of the two.
  • Different algorithms were tested on the same problem and compared on the number of steps required to reach the answer with 99% confidence. The test candidates were QA, SA, an algorithm called Quantum Monte Carlo (QMC), and other classical algorithms.
  • The problem was constructed specifically to cause the worst-case performance of SA. Therefore these results do not represent the performance of QA vs SA on an average problem.
  • Under these conditions, it was found that QA can require as much as 10^8 times fewer steps to arrive at the solution than SA or QMC.
  • SA is very bad at problems that contain many deep, narrow wells separated by tall walls, because the jumping probability decreases exponentially with the height of the barriers. Here QA can solve the problem much, much faster.
  • QA is bad at problems where the wells are far apart, because the tunneling probability decreases exponentially with the separation between the wells. Here QA offers no additional benefit over SA.
  • The tests were repeated for 180, 296, 489, 681 and 945 variables to see how the solving time scaled with the number of variables.

(Figure caption: Tunneling provides a large advantage to QA over SA when finding the minimum between wells A, B and C; however, in the long-range transition from C to D, QA loses its edge over SA.)

So back to the original question. Is the D-Wave a quantum computer?

Well, the D-Wave does indeed display some distinctly quantum phenomena, such as entanglement and tunneling. However, the paper finds that the D-Wave has the same scaling as QMC. I will not elaborate on QMC here, except to say that it can be run efficiently on a classical computer. The D-Wave is reported to be much faster than QMC, but by a large polynomial factor, not an exponential one. Therefore the D-Wave does not offer a quantum speedup over classical computers and, strictly speaking, cannot be called a quantum computer.

However whether or not the D-Wave is a true quantum computer is more of an academic question (and a question about the PR strategy of D-Wave).

Optimization problems are a very important class of problems, especially in the context of machine learning and artificial intelligence. That is probably why Google has pumped so much time and money into testing the D-Wave. At the very least, this investment will yield new insights into the nature of algorithms and quantum computing; if we are lucky, it might yield a generation of quantum annealers that are several orders of magnitude faster than today’s computers, despite not being ‘true’ quantum computers.

Practically speaking, if the D-Wave indeed has a large advantage over a classical computer on certain problems, then all one cares about is whether those problems are worth solving. The paper suggests a couple of algorithmic problems that might utilize the power of this computer, a list they hope to expand in the future. They also expect the next generation of quantum annealers to be even more powerful. Whether the D-Wave can become more than a controversial curiosity will depend on their ability to make good on that promise.

From Academia to Corporate

I just finished my first few weeks at Google Inc. As quick as my actual transition from grad school to a corporate gig has been, the emotional rollercoaster has been ramping up for quite a while. And now that I have finally embraced the dark side, I am often asked: why? Why did I spend 6+ years doing an MS-PhD in Physics, only to leave academia and work in an unrelated field as a Quantitative Analyst at Google? Some just ask why anyone would do a PhD at all.

I started seriously thinking about writing this post when my friend (and entrepreneur) Vaibhav Devanathan asked me, “given the choice again, knowing what you know now, would you choose to do a PhD?”

Hmm… would I?

Would I choose to join my lab, which (now I know) has to be kept punishingly cold and dark for the sake of state-of-the-art optics equipment? I used to think academia was a utopia, where brilliant people sit around together drinking coffee and solving equations on blackboards when they are not winning Nobel prizes. It is partly that, but it can also get very lonely when your funding is cut, or your experiments are not working, or when everything works but just not well enough. Now that I know that, would I still dare to choose that life? But most of all, would I still do a PhD, knowing that while all my friends are earning 5 times as much, I am spending 4+ years earning a degree that could actually decrease my value in the job market?

It is not an easy question to answer. Especially when it goes beyond hypotheticals and prospective students ask me whether they should apply to grad school, it behooves me to be honest. So I decided to write down my thoughts, starting from why I decided to do a PhD in the first place to how I feel about that decision now.

School (Woe be upon thee)

Middle school, as far as I could tell, is where most students begin hating school. In Indian school education, minutiae are often conflated with rigor, and lack of choice with discipline. The ‘cool’ teachers would tell us then that high school would be better. That’s when we could choose our subjects. Some of us went ‘Yay, no more social studies!’, while others went ‘Yay, no more science!’.

When we got to high school, however, we were hit by more unnecessary discipline, painfully dull curricula and criminally hypocritical exams. And yet again, I had a ‘cool’ class teacher who told us every day to pay our dues and get into a top university, which would be a magical land of sincere learning.

Thinking back I realize he never actually said that, but it is what I understood at the time. And so like most people, I worked hard to get into the best college I could.

Undergraduate (stars in their eyes)

I coasted through most of my undergrad as a mediocre student. Perhaps I should have studied more, learned more and fought harder for grades. One of the reasons I did not (besides the fact that I was surrounded by really smart people and grading was relative) was that I thought I had done it; I had made it to the promised land where grades did not matter.

And to be fair, some things were better. No unnecessary discipline and some professors really tried. Yet for some reason, it was not enough to make me commit.

In my final year, however, I had to confront reality. My four years at IITB were magical, but I still hadn’t found what I was looking for. They had, however, given me hope that there could be better things out there.

So I applied to grad school, without even sitting for campus placements.

Grad School (And the faithful shall be rewarded)

And grad school delivered. For 6 years, I was surrounded by brilliant people who, despite our disparate backgrounds, life experiences and present circumstances, somehow shared with me one uniting sentiment. They were there because not even 16 years of education was going to stop them from learning.

The attitude towards learning and discovery that some of my colleagues, friends and advisors had was nothing short of inspirational. They were there because they wanted to learn about everything ‘we’ know. Not how much you or I or the pope or the president knew, or how much you could fit on a sheet in a three-hour slog, but what ‘we’ as a civilization, through all the millennia of accrued knowledge, collectively knew.

Of course not everyone thought this way, but enough did. There was plenty of bureaucracy, apathy and of course the pitiful salary, but these scarcely bothered me. It was relatively easy for me to lose myself in the joys of research, enough that for a while I really felt like I could be a graduate student for life.

But you can’t!

In the real world, graduate students have to, you know, graduate.

However, the more I learnt about academic life beyond grad school, the more disillusioned I became. Most professors spend all their time teaching, mentoring students and canvassing for their research, all the while navigating the bureaucracy of universities, journals and funding agencies. Yet even that didn’t sound half-bad compared to the 8-12 years between graduating with a PhD and becoming a tenured professor.

The demands of publications, citations and funding dollars upon postdocs and untenured professors reminded me of the tangential success metrics of high school exams. But what infuses this nightmare with a Kafkaesque hilarity is that, unlike in high school, success metrics in academia are often at the mercy of social, political and scientific trends and vagaries.

I wrestled with my doubts and fears for over a year. Eventually, about 6 months ago, I decided I would not stay in academia.

I decided that the academic utopia I experienced was simply a sheltered atmosphere that could not last long. Realizing I would never be able to reconcile my unreasonable expectations with that reality, I decided to quit while I was ahead.

Was it all for naught?

I don’t think so.

Of course there are the tangible benefits. I have a shiny new degree and about 2-3 years’ worth of coursework in math, science and programming. I also have about 2-3 years’ worth of what can only be called work experience, for lack of a better phrase. This work experience is different from industry, I am sure, but different is not useless… I hope.

I personally do not believe switching from a Physics PhD to Google is a loss. This is the first time in 10 years that I am not surrounded by physics and physicists. It is a gamble; I do not know how corporate life will suit me. So far, I am enjoying my work at Google. Of course, I do not know enough about job markets to say whether my future prospects have broadened or narrowed; time will tell, I suppose.

And then there is the intangible. For many years I chased the ideal place of learning that I imagined as a boy reading books about science. I got a brief (= 4 years :P) glimpse into that ideal world. I believe that vision might just save me from the crushing cynicism that I often see around me about education, or life in general.

While it is true that we must all grow up to accept the real world, I also believe that we must do more than accept, that we must seek to somehow make every day better than the day before. The optimism to work towards that abstract, Sisyphean goal is hard to find unless we have some direction, a glimpse into our notion of an ideal world.

Others might have their own unique experience; it might be the research team that wants to create the future, or the company that cares for its employees or the startup idea written on a paper napkin or just that parent/teacher/spouse/friend who showed you that our world is full of possibilities and wonder. For me it was grad school. And though they cannot last, I believe these brief glimpses are necessary to remind us what we are working towards, for without them our world is bleak indeed.

So should you do a PhD?

Go to the job fair at your college or university. If ‘student’ sounds better to you than any of the titles and positions on offer there, you should think about graduate school.

If you spent your school/UG years frustrated with (or disinterested in) academics, but ended up reading about those subjects in books outside the curriculum, you should consider graduate school.

In other words, if your goal in starting a PhD after UG is something like getting a good job, or winning the Nobel Prize, or that your friends and classmates are doing it, you will probably neither finish it nor benefit from it. I would advise not planning your whole future before you start your PhD, and if you do, staying flexible. Research especially, and life in general, often does not go according to plan.

However, if the PhD is a goal in itself, because you either love the subject or love learning and being a student, then grad school is the place for you. Only then can you be happy with your doctorate, regardless of whether you stay in academia or not.

Ultimately, even these reasons are not enough. 3-5 years is a big commitment, and the decision must be made with great care. As with everything else, only you can make a fully informed decision.

All I can say is that if 6 years ago I knew everything I know now, and had to choose… I would definitely still choose to do a PhD.


PS: This post is simply my opinion based on my subjective experience. Most of these experiences depend not only on factors such as university and field of study, but also on who you have around as parents, friends, advisor or significant other, in all of which I think I was quite lucky.

Spoken like a Nobel laureate!!

I have heard quite a few keynote lectures from Nobel Prize winners in Physics or Chemistry. At least ten that I can remember, possibly more.

I am being flippant on purpose. Nobel laureate lectures aren’t all that. When I went to my first such lecture, I was all agog. I expected to hear a profound speech that would inspire generations of scientists to a lofty ideal. After all, aren’t the great Nobel lectures of the past the motivational quotes of today?

But my excitement faded pretty soon. Every Nobel laureate lecture was just a professor droning about his research, the science and the results, the theory and the experiment.

Nothing more. Nothing less.

Don’t get me wrong, every one of them was a brilliant scientist, who had built their research painstakingly and made a landmark contribution to science. I do not mean to undermine their scientific contribution at all. But I do not agree with their choice of keynote lecture topic.

A lecture at a conference is usually an opportunity for a scientist to showcase her research and publicize her results. This is the mechanism by which a scientist gathers traction, attracts collaborators and funding, and achieves visibility and credibility.

What does a Nobel laureate achieve?

Nobel laureates do not need visibility for their research. By virtue of the Prize, their research has gotten more exposure than anyone else’s. By virtue of the Prize, countless introductory and advanced-level articles have been written, and many hours of airtime have been dedicated to publicizing and explaining their research.

By virtue of the Prize, Nobel laureates have been given a platform. They have been elevated and given an opportunity to be heard by an audience their peers will never be afforded. Instead of using this power, and responsibility, judiciously, most laureates squander it simply explaining their research. That is not even selfish; it’s just utterly useless.

I heard Eric Betzig at CLEO 2015 in San Jose last month. He won the 2014 Nobel Prize in Chemistry for super-resolution microscopy. His lecture was the first, and only, one I have heard that was worth its Nobel salt.


Eric spent more time discussing the advantages and drawbacks of other contemporary work than his own. He promoted research that is currently going on, compared it with his work and pointed out why each might be more or less useful.

Even the time he spent on his own research, he used to tell his journey, his story. Every scientist can understand the equations, but starry-eyed grad students and young professors alike go to a Nobel lecture to hear how: how were the discoveries made, what was the process, what was the struggle, how can they do it themselves? His story of life in the last days of Bell Labs, his frustrations with academia and his failure as a businessman painted a fascinating picture of a life of learning, and hinted at the qualities required to succeed in research.

He also expressed his opinions on how research can or should be done, and some of the pitfalls that befall academics and businessmen. None of it felt like a sermon; it was just part of the story of Eric Betzig’s journey and the lessons he learnt.

Here is ‘a’ lecture by Eric Betzig, the best Nobel laureate lecture I have ever heard. It’s not the CLEO one, but the material is almost the same.

Although he did say ‘Fuck you’ at CLEO, this talk only has him saying ‘goddamn’ and ‘bitch’. How much do you have to achieve to be able to swear nonchalantly at every formal conference? Thank you, Eric, for putting it to the test and re-establishing the worth of the Nobel Prize.

Taking on the Indian education juggernaut

Education is one of those things that someone is always complaining about… like taxes, politics and the Indian cricket team. Among the educated middle class especially, there is almost unanimous, and amusingly self-contradictory, agreement: everyone seems to believe that education is the solution to a majority of our problems, yet they also believe that the current system of education needs radical reform.

The first question is what such reform should entail. The upper crust of Indian school and undergraduate students scores pretty high in tests compared to the rest of the world, especially in STEM fields. However, there is, perhaps, too much emphasis on learning by rote. The vast majority of students who are not in the upper crust never use the knowledge they learn in school. Rather, education ends up being a race for degrees, and the holistic aim of producing informed and intelligent citizens is lost.

In contrast, countries like the US err on the other extreme. The US school system does away with rote learning almost entirely, to the extent that a student’s knowledge remains conceptual, rarely tested or even put on paper. This method is very effective in higher education, when students are mature, motivated and capable of testing themselves, whereas the discipline of Asian middle and high schools is demonstrably much better for younger students.

So there are no magic bullets. Given such a nuanced problem, the more important question is who can or should reform education. Should it be the government?


An idealist might believe that the people are the government, and that each citizen must make choices that enable the change they want to see in society. A cynic might believe that the government is a toothless organization, a purely regulatory body constructed only to maintain the status quo and give society stability, incapable of innovation. Neither can realistically expect the government to magically do anything by itself. Only the lazy and intellectually dishonest can truly lay any sizeable responsibility for major social reform onto the government while they themselves do nothing.

I claim that innovation and social change have to come from the people. Many take the onus upon themselves personally and become teachers and professors who impact the system, one student at a time. I have wanted to be a professor myself for the longest time, and I probably will be… eventually.

However, the democratization of information sharing has given talented, motivated entrepreneurs, even those with no capital or leverage, a shot at making a bigger difference. The explosion of startup culture has suddenly placed societal change within the framework and reach of the rational common man.

Some might question the faith I express in entrepreneurs. How can we count on fickle dreamers for social progress? A company that fundamentally redefines the way we think and learn comes along once in a generation; how can that be trusted as an engine of change?

It’s true, entrepreneurs are not reliable. But harbingers of change never are. Most of the explorers who left Europe looking for India probably just died trying. We scarcely remember them and focus, rather unjustly, on the one person who finally did make it. But the future does not hinge on that one person. Rather, as more and more explorers and entrepreneurs dare to bet their lives and careers on a vision, they pressure society to notice them, and indeed follow them, until eventually change is inevitable. And sure, sometimes they might not find their destination and end up on a whole other continent. But in the process they will have found a new way of doing things, a new world, which will live and die by its merits. And if it survives, it has the potential to change the future.


This Friday marked the launch of LaughGuru, an education startup aimed at making learning fun, and more effective, for middle school children in India. I had a small part in its development from the inception, but it is primarily the hard work of Vaibhav Devanathan, my classmate from IITB, and the great team he has built around himself. He is a true explorer, having left lucrative offers at McKinsey and Harvard Business School to pursue a dream.

(Screenshot: LaughGuru site landing page)

Only time will tell how big an impact it will have. In the meantime, all an explorer can do is follow the compass in his head, read the stars and tackle what lies ahead. One rosy sunset, or one perfect storm, at a time.

Who can communicate science better: Scientists or Writers?

Nobel Prize winner Steven Weinberg recently wrote a piece in the Guardian which talks about the history of science and science communication. The post elicited a sharp response from Philip Ball on his blog. The two bring up an important point that is becoming increasingly relevant today: who communicates science better, scientists or writers?

The increased relevance, which presumably both would agree on, comes from the fact that it is unhealthy for a society if the general public becomes too divorced from the knowledge of the current state of science. After all, science and technology are crucial drivers of societal progress, and a populace that does not understand why they are important cannot devote its resources appropriately.


I have been interested in science communication for a long time. This blog is mostly practice for just that. To that end, I follow many popular science writers (usually journalists or writers who have a passion for understanding and popularizing science) and scientist writers (scientists who at some point in their career devote more time to communicating science).

Even though both these groups have a common objective, scientists are often quite critical of writers. And I can see why. I have clicked many a sensational headline that fizzled into an article that either did not justify the heading or just seemed like a new and misleading spin on an old idea. However, I also know a few writers who do a decent job of bringing science news to popular media such as Twitter, where scientists lack presence.

I don’t understand how Philip can disagree with Weinberg when he says “mathematics is the main obstacle in explaining cutting edge science to the general public”. I think it is because when Weinberg says science, he really means physics. Though that is definitely misleading and maybe incorrect, it is hardly unexpected, since physics is Weinberg’s area of expertise. He might assume that his frustrations with science communication in physics are shared by experts in other fields. Perhaps they are not felt as acutely in, say, biology, since biology is not as abstractly mathematical as physics. Words can do a lot of justice to concepts in biology.

However, I think abstract math, be it string theory or wave-functions, is harder to explain in English. I personally think quantum physics cannot be explained without mathematics. If I say in English that objects walk through walls, that is as meaningless as saying I am the King of England. Only with the mathematics can I explain that objects can walk through walls in a manner that doesn’t require the listener to simply take my word for it. “Explain” rather than just “tell”.
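To make my own point concrete (this is my sketch, not an example from Weinberg or Philip): the quantum “walking through walls” is tunneling, and the standard textbook result for a rectangular barrier of height $V_0$ and width $d$ says the transmission probability falls off exponentially:

```latex
% Transmission probability for a particle of mass m and energy E < V_0
% tunneling through a rectangular barrier (WKB approximation):
T \approx e^{-2\kappa d}, \qquad
\kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}
```

This one line explains rather than tells: for a person-sized mass $m$, the exponent $\kappa d$ is astronomically large, so the probability is never strictly zero but is unimaginably small, while for an electron and a thin barrier it is routinely observable. No English paraphrase carries that distinction.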


Philip, however, says that science writers can explain anything without math. Perhaps he is right; perhaps that is why we need science writers. The few times I have tried, I crashed and burned. 😛

XKCD : The Greatest God

While Randall Munroe’s giant posters have become the more popular format, for me the true genius of XKCD has always been its ability to break down a really complex concept into a single frame and a few words. The latest comic exemplifies it –

A God who holds the record for eating the most skateboards is greater than the God who does not hold that record

The premise of an all-powerful God rapidly spirals out of control. If God is all powerful, wouldn’t an incomprehensible God indifferent to humanity be greater than a petulant, jealous, vengeful one? Or even than a merciful and just one? After all, those qualities are just as human.

A God with violable rules and methods, emotions and enemies is not really all powerful. A truly all powerful God would just ‘be’, without a care for who disobeys him. In fact, a God who can be disobeyed must be lesser than a God who can never be defied. At which point there is no real distinction between God and the fabric of reality, the truly inescapable laws of physics and existence.

Alternatively, if God does have all those human qualities, then he cannot be all powerful. In that sense, isn’t God more like an advanced civilization or entity whose knowledge and technology seem magical and supernatural to our primitive minds? In which case, isn’t it reasonable to wonder if we humans could build a drill to pierce the heavens? (Yes, you should watch Gurren Lagann.)

First World Problems : Should Robots Have Gender?

Even with all the clamor around the documentary India’s Daughters, the aftermath of rape in India and women’s rights in Muslim countries, I did not write anything about it. Primarily because there is a surfeit of smart women who can and did talk about the issue themselves; they don’t need a middle class, Tambrahm dude preaching about what he imagines it’s like to be oppressed by the hypocritical patriarchy. I think I contributed more by liking their posts than by writing one myself.

It would have been funny if it were not true.

Having said that, I do consider myself quite an expert in philosophical first world problems and their hypothetical impact on society. So when Slate ran an article exploring the motivations, nuances and consequences of assigning gender to robots, I jumped at the chance to talk about gender in a totally noncommittal, never-been-affected-by-it, you-go-gals-I’ll-wait-here context.

The key point of the article is this : whether we want to or not, we unconsciously assign humanity, and gender, to our environment, usually based on prevailing societal stereotypes.
For example : robots with angular construction and darker colors seem more masculine to people, as do ones used in strength-intensive functions such as lifting or construction.
Robots with lighter colors, curvy designs and intended for calmer functions such as those involved in healthcare and teaching are usually identified by people as female.
Guess which one is male and which one female. And go watch Wall-E.
This obvious stereotyping occurs even if the robots do not have an interactive voice, or even a “face”.
The article goes on to describe NASA’s answer to the DARPA Robotics Challenge – the Valkyrie DRC Robonaut. Built with the intention of replacing humans for tasks that are too dangerous, the robot has been given a female name and characteristics. The article praises NASA for doing this. However, when asked specifically if the Valkyrie was intended to be female, NASA chose not to assign its robots a gender.

In an ideal world, I think what NASA did was correct. Robots do not have gender. It is a slippery slope if you start assigning them one. If certain characteristics strike someone as masculine or feminine, that can be left to the eye of the beholder. There need not be any explicit delineation of robot gender by their creator, just as buildings and cars are not required to be male or female, despite people’s individual preferences.

Unfortunately, we do not live in an ideal world.

Emotions and symbolism exert more power over our actions and beliefs than we would like to admit. As the Slate article says, “if robots are given female form only for designing sexbots and maids, and all the heavy lifting is done by male robots, what will it say about the humans who use these bots”?

Is it possible to prevent this from happening? I don’t think so. A private robot manufacturer will be free to design and label its products. I think sexbots and cleaningbots will be given the female form, simply because they might sell more. Even if we could legislate that all robots should be sexless, is it the right thing to do? It can be argued that a feminine design for a healthcare bot could actually be beneficial for a patient’s emotional and psychological recovery.
Given such grey areas, it might be more practical to admit that whether we like it or not, many robots will be assigned genders, be it to augment their function or just to augment their sales.

And in this imperfect world I have to agree with the article: we might need (and I can’t believe I am saying this) “strong female robot role models” for the same reason we have had to ‘promote’ women in science – to prevent prejudices and stereotypes from denying rights and opportunities to those who deserve them.

Multiverse and The Nature of Reality.

There is (was) a very interesting debate going on online about quantum physics and the nature of reality. You can follow it here and here and here and here.
The Many Worlds Interpretation of Quantum Physics is a very interesting and persuasive idea: there are an infinite number of parallel universes, all of which share a common origin and whose paths diverged at some point in the past. If you have never heard of it, then check out the two videos I have embedded.
I do not want to comment on the details of the entire debate, since most of the debaters are vastly more experienced and qualified than me. However, I will mention here my older post where I treat the issue in more detail.
I do feel that in such online debates the crucial points from both sides are sometimes lost in translation. I agree with Philip Ball that MWI is saying nothing new and game changing. Rather, it is just an alternative and completely compatible way of looking at the nature of reality. Also, since it is experimentally indistinguishable from other viable interpretations, it is a philosophical curiosity and not a scientific derivation. Carroll’s claim that MWI is self-evident and the ‘correct’ method therefore seems a little over-zealous to me.
However, I disagree with a couple of Philip’s objections to MWI, specifically that MWI erases personhood. If the universe is splitting at practically every instant, many branches of which contain a nearly identical version of you, then who is the real you? This is a (philosophical) problem. But MWI does not create this problem; it is inherent in quantum physics.
If the electron can go both right and left at the same time, which way did the electron really go? Consequently, if the experimenter can see the electron go left or right, should not the experimenter also be in a superposition of having seen the electron go left and right at the same time?
That is the basis of the Schrodinger cat paradox: can a cat be truly dead and alive at the same time?
MWI simply says yes, because the dead cat and the alive cat live in two separate universes, and our act of seeing the cat somehow splits us into one of those two universes; we can never see the other universe.
The Copenhagen interpretation says there is only one universe, the cat is in a weird fuzzy dead-and-alive state, and our measurement collapses it into a recognizable state, either pet or roadkill.
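The two pictures can be summarized in one line of standard textbook math (my own sketch, not taken from any particular post in the debate). Before observation the cat is in a superposition, and looking at it entangles the observer with it:

```latex
% Cat before observation:
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr)

% After the experimenter looks (observer entangled with cat):
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle\,|\text{sees alive}\rangle
             + |\text{dead}\rangle\,|\text{sees dead}\rangle\bigr)
```

Copenhagen says the measurement collapses $|\Psi\rangle$ onto one of the two terms; MWI says both terms persist, each as its own branch or universe. The mathematics is identical up to that final step, which is exactly why the two are experimentally indistinguishable.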
To me both views have their own brand of weird, and since they are experimentally indistinguishable to our current knowledge, equally valid. One universe with superpositions or multiple universes with no superpositions, either way quantum physics stays just as counter-intuitive and just as correct.
I don’t see any need to be dogmatic about a philosophical argument, and therefore I must agree with Philip since he seems to believe that as well.

Philosophy vs Science

In recent times we seem to be entering a new argument characterizing our time, the one between Philosophy and Science. I have read a whole slew of articles and posts on this issue lately. Most of these posts have been in defense of philosophy, sparked by incidents where famous physicists like Stephen Hawking and Neil deGrasse Tyson seem to be disparaging and dismissing philosophy as a worthless enterprise. I recently read another such post by a scientist, which sparked me to write this post providing a counter-view.
When Hawking or Feynman or NdGT criticize philosophy it seems disingenuous, since these minds have produced sentences of great philosophical earnestness and profundity in recent times. I myself am incredibly partial to philosophical speculation and conjecture, as this blog can easily convince you. Yet I find myself agreeing with these “critics” of philosophy. So I will pretend I understand what they mean by their dismissal and try to elucidate it.

Also, I will limit myself to natural philosophy, the philosophy that seeks to explain the natural world around us. For sure, there is philosophy of morality and behavior and politics and others that I am unaware of, but I presume no physicist is trying to comment on that, nor am I qualified to comment.

Much of the debate between philosophy and physics seems to be muddled by the semantics of what philosophy is. Both science and natural philosophy can be said to be the love and pursuit of truth as it pertains to the natural world around us. Science, in fact, is a descendant of natural philosophy, in the sense that all the old world fathers of science were philosophers of their time. For most defenders of philosophy, the shared heritage and eventual divergence seems to be a positive argument justifying why philosophy is relevant. I do not agree. Just because we have two different words for two nuances of the pursuit of knowledge doesn’t mean they need to be pursued by two separate classes of people.
The problem is that the natural sciences have so far surpassed the realm of common knowledge that it is impossible to ask meaningful questions, much less answer them, without a long and intricate study of the natural sciences.

As a result, today there can only be two constructive kinds of philosophers.
1. Active scientists who have the intellect and courage required to see the larger picture and ask questions that may or may not be easily answered.
2. Individuals who are well trained in science, enough to reach the frontier of human knowledge, but choose not to engage in active scientific research, preferring the more contemplative and speculative method of philosophy.

Regardless of which camp you are in, you qualify to be called a scientist. The designation of philosopher can be applied to the second class only with full knowledge that they are ex-scientists or, at worst, amateur scientists, but never non-scientists. A non-scientist, one who did not go through training in science, simply cannot understand what is already known and therefore cannot even ask the right questions about the unknown.
The point is beautifully made in The God Delusion by Richard Dawkins, although about religion, not philosophy. Physicists and scientists often deflect unanswerable philosophical questions by saying that is not the purview of science but of theology. And Dawkins asks, why? What expertise does religion bring in the attempt to answer the question?

A similar question could be posed to philosophy. If an individual who is not trained in science is asked why the universe exists, how good can we really expect the answer to be? If you do not understand string theory or the standard model or relativistic field theory, what expertise about the nature of the universe can you bring to that question as a philosopher rather than as a scientist? My feeling, and perhaps the view of the famous scientists in question, is that a natural philosopher without training in science has nothing to add to this conversation.

Another notable spat in the philosophy–science tussle was the one between David Albert and Lawrence Krauss, wherein Albert criticizes Krauss’ book for dismissing the philosophical question of why the universe is as it is in favor of the scientific question of how the universe came to be as it is. I personally do not dismiss the question; in fact, I would side with Albert in his criticism of Krauss (I am not a fan of Krauss, which is why I don’t count him among the physicists I’m defending here). Albert is a professor of philosophy but has a PhD in theoretical physics. He represents the second kind of constructive philosopher. I am fairly sure his kind is not the one physicists have a problem with, nor do I.
Remember, Feynman, Hawking and NdGT are huge public faces of science and face vastly more of the general non-scientist populace than most others. I am sure they face questions every day from “philosophers” who would like to stump scientists in an attempt to prove that science does not have all the answers. The anti-science ignorance is betrayed by the presumed non sequitur that if scientists cannot answer a question, someone else must be able to. Which brings us back to Dawkins’ question: who? Who is qualified to ask or attempt answers to these questions? A philosopher? The famous scientists, in my opinion, are dismissing amateur philosophy by non-scientists who can easily be misled into believing that a difficult question is a profound question.

As the post points out, Hawking, Feynman and NdGT are themselves philosophers in many respects, and we are all wiser for it. In fact philosophy, in so much as it is questioning every aspect of knowledge and attempting to formulate answers, is the fundamental building block of science. I believe philosophy of science should be required learning for all scientists, to either excite and unleash their inner philosopher or, at the very least, inform them of the thought processes of the giants in their fields. Science encompasses and surpasses all of natural philosophy; one could say philosophy has grown into modern science, hence the observation that philosophy is dead, replaced by science.
A final confusion here is the difference between philosophy being dead and philosophy being unimportant. Latin and Sanskrit are dead, yet a study of these languages is essential in linguistics. They are essential to understand how the currently living languages evolved and to understand broader aspects of the civilizations they thrived in. They also might provide insight into the future of language evolution and methods of language construction.

Similarly, a study of philosophy is essential, not only to understand the history of science and philosophy but also to understand the evolution of human thought in the past, present and future. More importantly, philosophy for a non-scientist is an introduction to scientific thought and, for scientists, a source of great foresight and insight.

But natural philosophy, as a separate entity and not as an attribute of scientific thought, making any meaningful contribution to understanding the physical world is just as likely as Latin making a comeback as a practical language.

A scientist with a philosophical bent is a great scientist. A scientist who is not a philosopher is a good scientist. A philosopher who is not a scientist (and not trained as one) is just “dopey”.