Originally published on Ribbrish
I seem to have developed something like writer’s block (though I agree that I am as far from a writer as one could be). The last time I updated this blog, I started with a good idea and wrote a bad story out of it. I have been thinking about another story idea for months now but am unable to write it down, simply because I am too lazy to do it. Instead, let me bore you by doing exactly what this article professes to be a cardinal sin: broadcasting half-baked information.
Collaborative sensing, or sensing via wireless sensor networks, is not exactly a new idea. For the uninitiated, wireless sensor networks can be thought of as collections of minuscule devices that collect data about their surroundings and report it to some other device that gathers all that data. For example, think of a shopping center with multiple entry points, and sensors placed under each entrance to measure the total footfall during the day, with all the sensors reporting the footfall to a data collection unit. (No, do not think of Mr. Bean dancing on one of these sensors and sabotaging the entire system.) Or think of a huge industrial furnace that has to maintain a specific temperature, with sensors placed inside it to measure the temperature periodically (you cannot expect a person to stand in there) and report these temperatures wirelessly to a control center where suitable action can be taken.
One important thing about sensor networks is that the sensors used in them are generally cheap. Cheap here means their quality is not what one would call “top class”. Therefore, the individual nodes can be rather inaccurate in their estimates. However, collectively they tend to be pretty accurate. (Remember the birds caught in the net in that Panchatantra story, or the fish in Finding Nemo.)
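The collective-accuracy claim can be sketched numerically: averaging many cheap, noisy readings drives the error of the fused estimate well below the typical error of any single sensor. Here is a minimal simulation; the true temperature, noise level, and sensor count are all made up for illustration:

```python
import random
import statistics

random.seed(0)

TRUE_TEMP = 100.0   # the furnace temperature we are trying to measure
NOISE_STD = 5.0     # each cheap sensor is individually quite noisy

# 1000 independent noisy readings of the same true temperature
readings = [TRUE_TEMP + random.gauss(0, NOISE_STD) for _ in range(1000)]

# Typical error of a single sensor vs. error of the averaged estimate
individual_error = statistics.mean(abs(r - TRUE_TEMP) for r in readings)
collective_error = abs(statistics.mean(readings) - TRUE_TEMP)

print(individual_error)  # on the order of a few degrees
print(collective_error)  # a small fraction of a degree
```

With independent noise, the standard error of the average shrinks roughly as one over the square root of the number of sensors, which is why the collective estimate ends up far more accurate than any individual node.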
Over the years, many communication engineers (actual ones, not the ones who write blogs sitting in the lab) have tried to maximize the performance of such networks and found that, under suitable conditions, a sensor can actually measure the accuracy of its own estimate. It has also been found that the overall system performance can be substantially improved if the nodes with worse-quality estimates contribute less to the overall system output, which also makes intuitive sense. Whenever a sensor’s estimation quality falls below a certain level, it abstains from transmitting its results to the system, since a low-quality estimate can actually lead to an incorrect final measurement and do more harm than good. This abstinence of sensors from sharing their estimates is generally known as the censoring mode.
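One common way to realize both ideas at once, sketched here purely for illustration, is inverse-variance weighting with a censoring threshold: each sensor’s reading is weighted by the inverse of its self-assessed noise variance, and sensors whose variance exceeds the threshold abstain entirely. The function name, threshold value, and interface below are my own inventions, not taken from any specific paper:

```python
def fuse(measurements, variances, censor_threshold=25.0):
    """Inverse-variance weighted fusion with censoring.

    Sensors whose self-assessed noise variance exceeds the
    threshold abstain (the censoring mode); the rest contribute
    in proportion to 1/variance, so worse estimates count less.
    Returns None if every sensor censored itself.
    """
    kept = [(m, v) for m, v in zip(measurements, variances)
            if v <= censor_threshold]
    if not kept:
        return None
    weights = [1.0 / v for _, v in kept]
    weighted_sum = sum(w * m for (m, _), w in zip(kept, weights))
    return weighted_sum / sum(weights)


# Two decent sensors and one terrible one; the terrible one
# (variance 1000) is censored and never touches the result.
print(fuse([10.0, 11.0, 500.0], [1.0, 4.0, 1000.0]))  # → 10.2
```

Note how the wild 500.0 reading is simply dropped rather than dragging the fused estimate off, which is exactly the harm-avoidance argument behind censoring.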
I, however, propose that along with the censoring mode, the sensors in wireless sensor networks should have a social media mode as well. That is, the worse a sensor’s estimate quality, the higher the power (the louder) with which it should communicate with the data center. Moreover, any sensor whose measurements differ from those of the sensor in question should be labeled either a “Bhakt” or a “Libtard”, and the sensor must push the data center to avoid all contact with sensors holding different observations. Some of the sensors should also write articles like “5 reasons why high-noise measurements that agree with us totally nail it” or “Why the data center should be wary of all other measurements”. Or maybe the sensors should take just one measurement in their lifetime and then spend the rest of the time explaining why theirs is the only measurement taken correctly, and why the sensors with contradicting views are either corrupt, or cowards and psychopaths.
That’s all, folks!
P.S. The above article is a technical idea and does not intend to make fun of the luminaries who spend all their time on social media convincing the (a fraction of the) world of their ideas. Any similarity is purely coincidental.