Monday 31 October 2016

Detecting multiple bright spots in an image with Python and OpenCV

Normally when I do code-based tutorials on the PyImageSearch blog I follow a pretty standard template of:
  1. Explaining what the problem is and how we are going to solve it.
  2. Providing code to solve the project.
  3. Demonstrating the results of executing the code.
This template tends to work well for 95% of the PyImageSearch blog posts, but for this one, I’m going to squash the template together into a single step.
I feel that the problem of detecting the brightest regions of an image is pretty self-explanatory so I don’t need to dedicate an entire section to detailing the problem.
I also think that explaining each block of code followed by immediately showing the output of executing that respective block of code will help you better understand what’s going on.
So, with that said, take a look at the following image:
Figure 1: The example image that we are detecting multiple bright objects in using computer vision and image processing techniques (source image).
In this image we have five lightbulbs.
Our goal is to detect these five lightbulbs in the image and uniquely label them.
To get started, open up a new file and name it detect_bright_spots.py. From there, insert the following code:
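The original code listings did not survive the page extraction, so the blocks below are my reconstruction of what the script most plausibly contained, sketched from the line references in the prose (references like "Lines 2-7" refer to the complete script, not to each individual snippet). Treat argument names and parameter values as assumptions rather than the author's verbatim code.

# import the necessary packages
from imutils import contours
from skimage import measure
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
args = vars(ap.parse_args())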
Lines 2-7 import our required Python packages. We’ll be using scikit-image in this tutorial, so if you don’t already have it installed on your system be sure to follow these install instructions.
We’ll also be using imutils, my set of convenience functions used to make applying image processing operations easier.
If you don’t already have imutils installed on your system, you can use pip to install it for you:
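pip install --upgrade imutils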
From there, Lines 10-13 parse our command line arguments. We only need a single switch here, --image, which is the path to our input image.
To start detecting the brightest regions in an image, we first need to load our image from disk followed by converting it to grayscale and smoothing (i.e., blurring) it to reduce high frequency noise:
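A sketch of that step; the (11, 11) Gaussian kernel size is my assumption, chosen large enough to smooth away high frequency noise:

# load the image, convert it to grayscale, and blur it
image = cv2.imread(args["image"])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (11, 11), 0)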
The output of these operations can be seen below:
Figure 2: Converting our image to grayscale and blurring it.
Notice how our image is now (1) grayscale and (2) blurred.
To reveal the brightest regions in the blurred image we need to apply thresholding:
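Sketched to match the 200/255 values described just below:

# threshold the image to reveal light regions in the blurred image
thresh = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)[1]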
This operation takes any pixel value p >= 200 and sets it to 255 (white). Pixel values < 200 are set to 0 (black).
After thresholding we are left with the following image:
Figure 3: Applying thresholding to reveal the brighter regions of the image.
Note how the bright areas of the image are now all white while the rest of the image is set to black.
However, there is a bit of noise in this image (i.e., small blobs), so let’s clean it up by performing a series of erosions and dilations:
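The erosion/dilation block, with iteration counts that are my guess since the original listing is gone:

# perform a series of erosions and dilations to remove any small
# blobs of noise from the thresholded image
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=4)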
After applying these operations you can see that our thresh image is much “cleaner”, although we still have a few leftover blobs that we’d like to exclude (we’ll handle that in our next step):
Figure 4: Utilizing a series of erosions and dilations to help “clean up” the thresholded image by removing small blobs and then regrowing the remaining regions.
The critical step in this project is to label each of the regions in the above figure; however, even after applying our erosions and dilations we’d still like to filter out any leftover “noisy” regions.
An excellent way to do this is to perform a connected-component analysis:
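Here is a sketch of that block, again reconstructed from the prose (the "Line 32", "Line 33", etc. references below correspond to positions in the full script). One assumption: recent scikit-image releases take a connectivity argument in place of the older neighbors keyword, so that is what appears here. The 300-pixel threshold matches the value quoted below.

# perform a connected component analysis on the thresholded image,
# then initialize a mask to store only the "large" components
labels = measure.label(thresh, connectivity=2, background=0)
mask = np.zeros(thresh.shape, dtype="uint8")

# loop over the unique components
for label in np.unique(labels):
    # if this is the background label, ignore it
    if label == 0:
        continue

    # otherwise, construct the label mask and count the
    # number of pixels
    labelMask = np.zeros(thresh.shape, dtype="uint8")
    labelMask[labels == label] = 255
    numPixels = cv2.countNonZero(labelMask)

    # if the number of pixels in the component is sufficiently
    # large, then add it to our mask of "large blobs"
    if numPixels > 300:
        mask = cv2.add(mask, labelMask)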
Line 32 performs the actual connected-component analysis using the scikit-image library. The labels variable returned from measure.label has the exact same dimensions as our thresh image — the only difference is that labels stores a unique integer for each blob in thresh.
We then initialize a mask on Line 33 to store only the large blobs.
On Line 36 we start looping over each of the unique labels. If the label is zero then we know we are examining the background region and can safely ignore it (Lines 38 and 39).
Otherwise, we construct a mask for just the current label on Lines 43 and 44.
I have provided a GIF animation below that visualizes the construction of the labelMask for each label. Use this animation to help yourself understand how each of the individual components is accessed and displayed:
Figure 5: A visual animation of applying a connected-component analysis to our thresholded image.
Line 45 then counts the number of non-zero pixels in the labelMask. If numPixels exceeds a pre-defined threshold (in this case, a total of 300 pixels), then we consider the blob “large enough” and add it to our mask.
The output mask can be seen below:
Figure 6: After applying a connected-component analysis we are left with only the larger blobs in the image (which are also bright).
Notice how any small blobs have been filtered out and only the large blobs have been retained.
The last step is to draw the labeled blobs on our image:
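One more reconstructed block; imutils.grab_contours is used here so the snippet works across OpenCV 2/3/4 return signatures, which may differ from whatever the original post used:

# find the contours in the mask, then sort them from left to right
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = contours.sort_contours(cnts)[0]

# loop over the contours
for (i, c) in enumerate(cnts):
    # draw the bright spot on the image as an enclosing circle
    # with a unique label
    (x, y, w, h) = cv2.boundingRect(c)
    ((cX, cY), radius) = cv2.minEnclosingCircle(c)
    cv2.circle(image, (int(cX), int(cY)), int(radius),
        (0, 0, 255), 3)
    cv2.putText(image, "#{}".format(i + 1), (x, y - 15),
        cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2)

# show the output image
cv2.imshow("Image", image)
cv2.waitKey(0)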
First, we need to detect the contours in the mask image and then sort them from left-to-right (Lines 54-57).
Once our contours have been sorted we can loop over them individually (Line 60).
For each of these contours we’ll compute the minimum enclosing circle (Line 63) which represents the area that the bright region encompasses.
We then uniquely label the region and draw it on our image (Lines 64-67).
Finally, Lines 70 and 71 display our output results.
To visualize the output for the lightbulb image be sure to download the source code + example images to this blog post using the “Downloads” section found at the bottom of this tutorial.
From there, just execute the following command:
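Assuming the first example image in the downloads is named lights_01.png inside an images directory (the exact filename is my assumption):

python detect_bright_spots.py --image images/lights_01.png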
You should then see the following output image:
Figure 7: Detecting multiple bright regions in an image with Python and OpenCV.
Notice how each of the lightbulbs has been uniquely labeled with a circle drawn to encompass each of the individual bright regions.
You can visualize a second example by executing this command:
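Again, the filename here is assumed:

python detect_bright_spots.py --image images/lights_02.png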
Figure 8: A second example of detecting multiple bright regions using computer vision and image processing techniques (source image).
This time there are many lightbulbs in the input image! However, even with many bright regions in the image our method is still able to correctly (and uniquely) label each of them.

Wednesday 12 October 2016

Send Passwords Securely Through Your Body Instead of Wi-Fi


Rather than rely on easy-to-hack Wi-Fi or Bluetooth signals, researchers have developed a system that uses the human body to securely transmit passwords.
Computer scientists and electrical engineers have devised a way to relay the signal from a fingerprint scanner or touchpad through the body to a receiving device that is also in contact with the user. These "on-body" transmissions offer a secure option for authentication that does not require a password, the researchers said.
"Let’s say I want to open a door using an electronic smart lock," said study co-lead author Merhdad Hessar, an electrical engineering doctoral student at the University of Washington. "I can touch the doorknob and touch the fingerprint sensor on my phone and transmit my secret credentials through my body to open the door, without leaking that personal information over the air." [Body Odor and Brain Waves: 5 Cool New ID Technologies]
The system uses signals that are already generated by fingerprint sensors on smartphones and laptop touchpads, which have thus far been used to receive input about the physical characteristics of a user's finger.
"What is cool is that we’ve shown for the first time that fingerprint sensors can be re-purposed to send out information that is confined to the body," study senior author Shyam Gollakota, an assistant professor of computer science and engineering at the University of Washington, said in a statement.
The researchers devised a way to use the signals that are generated by fingerprint sensors and touchpads as output, corresponding to data like a password or access code. Rather than transmitting sensitive data "over the air" to a receiving device, the system allows that information to travel securely through the body to a receiver that's embedded in a device that needs authentication.
In tests so far, the system worked with iPhones, Lenovo laptop trackpads and the Adafruit touchpad (a trackpad that can be used with computers). The tests were successful with 10 people who had different heights, weights and body types, and worked when the subjects were in different postures or in motion. The on-body transmissions reached bit rates of 50 bps for the touchpads and 25 bps for the phone sensors — fast enough for a simple password or numerical code. Bit rates measure the amount of data that can be transmitted per second, with higher rates representing more data (for instance, a small file rather than a simple password).
On-body transmissions could also be applied to medical devices, such as glucose monitors or insulin pumps, which require secure data sharing to confirm the patient's identity, according to the researchers.
Once they have more access to the software used by fingerprint sensor manufacturers, the researchers aim to continue researching how to provide greater and faster transmission options.
The technology is described in a study that was published online Sept. 12 in the Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing.

Compound TCP increases the speed of normal TCP in Windows 7

Microsoft has implemented its own homemade congestion provider as an alternative to the standard TCP congestion provider. It is called Compound TCP (CTCP), and it attempts to help certain connection types where TCP Slow Start takes forever:
  • High bandwidth connections requiring very large receive windows (RWIN)
  • Lossy connections requiring lots of retransmissions if the RWIN is too large.
These types of connections are becoming more common, as more international companies want to connect their distant offices with high-speed connections.

TCP Slow Start is a way to probe the network connection, where the sender "slowly" increases the TCP Send Window as it verifies that the network connection can handle it. If retransmissions are required, then it will slow down the growth of the TCP Send Window. On a high-bandwidth, high-latency connection, it might take an hour or more for TCP to make use of the full bandwidth. CTCP allows the TCP Send Window to grow faster, even if retransmissions are needed, but only if it detects that the network connection can handle it.

To change the congestion provider to CTCP (default on Windows 2008):
netsh interface tcp set global congestionprovider=ctcp
To revert the congestion provider to default TCP (default on Windows Vista):
netsh interface tcp set global congestionprovider=none
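To check which congestion provider is currently active, you can list the global TCP parameters; the setting shows up as the add-on congestion control provider:

netsh interface tcp show global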