Thursday 1 December 2016

Western Digital releases series of Raspberry Pi disk drives




WD's 256GB hard drive for Raspberry Pi computers. The kit includes a USB splitter cable and a microSD card preloaded with the custom New Out of Box Software (NOOBS) OS installer. Credit: WD

Wednesday 23 November 2016

New Supercapacitor Tech Produces Batteries That Charge in Seconds, Last for Days

The long hours that your smartphone takes to charge may soon become a thing of the past, as scientists, including one of Indian origin, have developed a new process to make electronic devices charge in seconds.
The researchers at University of Central Florida (UCF) in the US have developed a process to create flexible supercapacitors that have more energy storage capacity and can be recharged more than 30,000 times without beginning to degrade.
"If they were to replace the batteries with these supercapacitors, you could charge your mobile phone in a few seconds and you wouldn't need to charge it again for over a week," said Nitin Choudhary, a postdoctoral associate at UCF.
These supercapacitors, which are still at the proof-of-concept stage, could be used in phones, other electronic gadgets and electric vehicles, according to the study published in the journal ACS Nano.
Anyone with a smartphone knows the problem. After 18 months or so, it holds a charge for less and less time as the battery begins to degrade.
Scientists have been studying the use of nanomaterials to improve supercapacitors that could enhance or even replace batteries in electronic devices. It is a stubborn problem, because a supercapacitor that held as much energy as a lithium-ion battery would have to be much, much larger. So the team experimented with applying newly discovered two-dimensional materials, only a few atoms thick, to supercapacitors. Other researchers have also tried formulations with graphene and other two-dimensional materials, but with limited success.
"There have been problems in the way people incorporate these two-dimensional materials into the existing systems - that's been a bottleneck in the field. We developed a simple chemical synthesis approach so we can very nicely integrate the existing materials with the two-dimensional materials," said principal investigator Yeonwoong "Eric" Jung, Assistant Professor at UCF.
Scientists already knew two-dimensional materials held great promise for energy storage applications. But until the UCF-developed process for integrating those materials, there was no way to realize that potential, Jung said.
"For small electronic devices, our materials are surpassing the conventional ones worldwide in terms of energy density, power density and cyclic stability," Choudhary pointed out.
Cyclic stability defines how many times it can be charged, drained and recharged before beginning to degrade.
For example, a lithium-ion battery can be recharged fewer than 1,500 times without significant failure. By comparison, the new process created by the researchers yields a supercapacitor that does not degrade even after it has been recharged 30,000 times.

Jaguar files patent for vehicle access system with facial recognition and gait analysis


Jaguar Land Rover has filed a U.S. patent for technology that uses facial recognition and gait analysis to unlock doors.
Published on October 13, the patent application for “Door access system for a vehicle” technology describes cameras mounted under the windows of the doors, which would take both video and still images of individuals approaching from the front or from behind.
The technology would then match images to those stored on the car’s database using gait or movement recognition technology, and unlock the doors if the system detects the car owner approaching.
“The user of the vehicle must carry out a registration process which requires them to record a still image of their face and a moving image such as a hand gesture or their gait as they approach the vehicle,” the patent states.
The patent application also states that the cameras would take a second picture when the car owner is standing beside the car and the facial recognition software would compare this to images stored on the system’s database. If the two images match the system would automatically unlock the doors.
The system could also be combined with wireless key fobs as an additional security measure, the patent states.
The combined use of video footage and gait recognition analysis ensures that would-be thieves cannot trick the system by holding up a printed image of the car owner’s face.
The patent application unveils Jaguar’s plans to use stereoscopic cameras that will capture a 3D image, which would allow the system to gauge how far an individual is from the vehicle as well as helping to analyze their movement.

The application also suggests that future Jaguar and Land Rover models may not have any door handles, as the vehicle's doors would automatically open when the car recognizes its owner approaching.
“The moving image may be a gesture, such as a hand wave, a salute or another hand signal which the user makes on approach to or arrival at the vehicle,” the patent states. “A still more sophisticated embodiment may use discrimination between different gestures to unlock different doors of the vehicles.”
The patent also describes how the system could learn to recognize multiple users so that family members can share a car.
Jaguar also states in the patent that the new technology will make cars more secure and more convenient in the event that drivers lose their keys.
“It is an ongoing challenge of the automotive industry to improve vehicle functionality and design and to further enhance the sophisticated feel of vehicles without additional cost,” Jaguar states in the patent. “In particular, vehicle personalization, where vehicle functions and features can be aligned with specific user requirements is an increasingly common aim. As far as door entry is concerned, such systems must also be robust against mis-use, for example theft or loss of a key-fob so that vehicle security is maintained.”
Frost & Sullivan Intelligent Mobility recently released a new report entitled 'Biometrics in the Global Automotive Industry, 2016–2025', which forecasts that ongoing advancements in biometrics will significantly transform the driving experience, health wellness and well-being (HWW), and security of vehicles by 2025.

Monday 31 October 2016

Detecting bright spots in an image using Python and OpenCV


Normally when I do code-based tutorials on the PyImageSearch blog I follow a pretty standard template of:
  1. Explaining what the problem is and how we are going to solve it.
  2. Providing code to solve the project.
  3. Demonstrating the results of executing the code.
This template tends to work well for 95% of the PyImageSearch blog posts, but for this one, I’m going to squash the template together into a single step.
I feel that the problem of detecting the brightest regions of an image is pretty self-explanatory so I don’t need to dedicate an entire section to detailing the problem.
I also think that explaining each block of code followed by immediately showing the output of executing that respective block of code will help you better understand what’s going on.
So, with that said, take a look at the following image:
Figure 1: The example image that we are detecting multiple bright objects in using computer vision and image processing techniques (source image).
In this image we have five lightbulbs.
Our goal is to detect these five lightbulbs in the image and uniquely label them.
To get started, open up a new file and name it detect_bright_spots.py. From there, insert the following code:
Lines 2-7 import our required Python packages. We’ll be using scikit-image in this tutorial, so if you don’t already have it installed on your system be sure to follow these install instructions.
We’ll also be using imutils, my set of convenience functions used to make applying image processing operations easier.
If you don’t already have imutils installed on your system, you can use pip to install it for you: pip install --upgrade imutils
From there, Lines 10-13 parse our command line arguments. We only need a single switch here, --image , which is the path to our input image.
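The setup just described can be sketched as follows — the imports the text mentions and the single --image switch. This is a hedged reconstruction, not the post's exact listing, and the example path is an assumption:

```python
# Sketch of the script's setup: imports plus command line parsing.
import argparse

# The full script, per the text, would also need:
#   from imutils import contours
#   from skimage import measure
#   import numpy as np
#   import imutils
#   import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to the input image")
# Passing an explicit list here only so the sketch is self-contained;
# normally parse_args() reads sys.argv.
args = vars(ap.parse_args(["--image", "images/lights_01.png"]))
print(args["image"])
```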
To start detecting the brightest regions in an image, we first need to load our image from disk followed by converting it to grayscale and smoothing (i.e., blurring) it to reduce high frequency noise:
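As a sketch of what the grayscale conversion does under the hood: cv2's BGR-to-gray step is a per-pixel weighted sum of the three channels (ITU-R BT.601 weights). The toy 2x2 image below is an assumption for illustration:

```python
import numpy as np

# A toy 2x2 BGR image standing in for the image loaded from disk
# (cv2.imread returns pixels in B, G, R order).
image = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) computes a weighted sum of
# the channels (BT.601 weights, here in B, G, R order):
weights = np.array([0.114, 0.587, 0.299])
gray = (image @ weights).round().astype(np.uint8)
print(gray.tolist())  # [[29, 150], [76, 255]]

# cv2.GaussianBlur(gray, (11, 11), 0) would then replace each pixel
# with a Gaussian-weighted average of its neighbourhood, smoothing
# away high frequency noise before thresholding.
```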
The output of these operations can be seen below:
Figure 2: Converting our image to grayscale and blurring it.
Notice how our image  is now (1) grayscale and (2) blurred.
To reveal the brightest regions in the blurred image we need to apply thresholding:
This operation takes any pixel value p >= 200 and sets it to 255 (white). Pixel values < 200 are set to 0 (black).
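The rule just described can be sketched in NumPy (the sample pixel values are assumptions):

```python
import numpy as np

# A toy row of blurred grayscale pixel values.
blurred = np.array([10, 150, 199, 200, 230, 255], dtype=np.uint8)

# The thresholding rule from the text: p >= 200 becomes 255 (white),
# everything below 200 becomes 0 (black).
thresh = np.where(blurred >= 200, 255, 0).astype(np.uint8)
print(thresh.tolist())  # [0, 0, 0, 255, 255, 255]
```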
After thresholding we are left with the following image:
Figure 3: Applying thresholding to reveal the brighter regions of the image.
Note how the bright areas of the image are now all white while the rest of the image is set to black.
However, there is a bit of noise in this image (i.e., small blobs), so let’s clean it up by performing a series of erosions and dilations:
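The post's actual code applies cv2.erode and cv2.dilate; a pure-NumPy sketch of one 3x3 erosion followed by one dilation shows why this removes small blobs while preserving large ones (the toy image is an assumption):

```python
import numpy as np

def erode(img):
    # One 3x3 binary erosion: a pixel survives only if its whole
    # 3x3 neighbourhood is set (what cv2.erode does per iteration).
    padded = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img):
    # One 3x3 binary dilation: a pixel is set if any neighbour is set.
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

img = np.zeros((7, 7), dtype=int)
img[1, 1] = 1        # an isolated noisy pixel: erosion removes it
img[3:6, 3:6] = 1    # a solid 3x3 blob: erosion shrinks it, dilation regrows it
cleaned = dilate(erode(img))
print(int(cleaned[1, 1]), int(cleaned[4, 4]))  # 0 1
```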
After applying these operations you can see that our thresh  image is much “cleaner”, although we do still have a few left over blobs that we’d like to exclude (we’ll handle that in our next step):
Figure 4: Utilizing a series of erosions and dilations to help “clean up” the thresholded image by removing small blobs and then regrowing the remaining regions.
The critical step in this project is to label each of the regions in the above figure; however, even after applying our erosions and dilations we’d still like to filter out any leftover “noisy” regions.
An excellent way to do this is to perform a connected-component analysis:
Line 32 performs the actual connected-component analysis using the scikit-image library. The labels  variable returned from measure.label  has the exact same dimensions as our thresh  image — the only difference is that labels  stores a unique integer for each blob in thresh .
We then initialize a mask  on Line 33 to store only the large blobs.
On Line 36 we start looping over each of the unique labels . If the label  is zero then we know we are examining the background region and can safely ignore it (Lines 38 and 39).
Otherwise, we construct a mask for just the current label  on Lines 43 and 44.
I have provided a GIF animation below that visualizes the construction of the labelMask  for each label . Use this animation to help yourself understand how each of the individual components are accessed and displayed:
Figure 5: A visual animation of applying a connected-component analysis to our thresholded image.
Line 45 then counts the number of non-zero pixels in the labelMask . If numPixels  exceeds a pre-defined threshold (in this case, a total of 300 pixels), then we consider the blob “large enough” and add it to our mask .
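The labelling-and-filtering logic above can be sketched with a minimal stand-in for skimage's measure.label; the toy image and the lowered 4-pixel cutoff (300 in the post) are assumptions to keep the example small:

```python
import numpy as np
from collections import deque

def label_blobs(img):
    # Minimal 4-connected labelling, standing in for skimage's
    # measure.label: each connected blob of 1s gets a unique integer,
    # and the background keeps label 0.
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(img)):
        if labels[y, x]:
            continue
        current += 1
        labels[y, x] = current
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and img[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels

thresh = np.zeros((10, 10), dtype=int)
thresh[1, 1] = 1      # a 1-pixel "noise" blob: too small, filtered out
thresh[5:9, 5:9] = 1  # a 16-pixel blob: large enough, kept
labels = label_blobs(thresh)

# Keep only blobs above a minimum pixel count (300 in the post;
# 4 here so the toy example works).
mask = np.zeros(thresh.shape, dtype=int)
for label in np.unique(labels):
    if label == 0:    # label 0 is the background region
        continue
    label_mask = (labels == label).astype(int)
    if label_mask.sum() > 4:
        mask |= label_mask
print(int(mask.sum()))  # 16: only the large blob survives
```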
The output mask  can be seen below:
Figure 6: After applying a connected-component analysis we are left with only the larger blobs in the image (which are also bright).
Notice how any small blobs have been filtered out and only the large blobs have been retained.
The last step is to draw the labeled blobs on our image:
First, we need to detect the contours in the mask  image and then sort them from left-to-right (Lines 54-57).
Once our contours have been sorted we can loop over them individually (Line 60).
For each of these contours we’ll compute the minimum enclosing circle (Line 63) which represents the area that the bright region encompasses.
We then uniquely label the region and draw it on our image  (Lines 64-67).
Finally, Lines 70 and 71 display our output results.
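The drawing step relies on cv2.minEnclosingCircle; a rough, dependency-light sketch of what the returned circle represents (the toy mask and the centroid-based shortcut are assumptions — the real call computes the true minimal enclosing circle):

```python
import numpy as np

# One kept blob, as produced by the connected-component step.
mask = np.zeros((10, 10), dtype=int)
mask[5:9, 5:9] = 1

# Rough stand-in for cv2.minEnclosingCircle: centre the circle on the
# blob's centroid and extend the radius to its farthest pixel.
ys, xs = np.nonzero(mask)
cy, cx = ys.mean(), xs.mean()
radius = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
print(f"Region #1 at ({cx:.1f}, {cy:.1f}), radius {radius:.2f}")
```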
To visualize the output for the lightbulb image be sure to download the source code + example images to this blog post using the “Downloads” section found at the bottom of this tutorial.
From there, just execute the script from your terminal, supplying the path to the example lightbulb image via the --image switch.
You should then see the following output image:
Figure 7: Detecting multiple bright regions in an image with Python and OpenCV.
Notice how each of the lightbulbs has been uniquely labeled with a circle drawn to encompass each of the individual bright regions.
You can visualize a second example by running the same script on the second example image:
Figure 8: A second example of detecting multiple bright regions using computer vision and image processing techniques (source image).
This time there are many lightbulbs in the input image! However, even with many bright regions in the image our method is still able to correctly (and uniquely) label each of them.

Wednesday 12 October 2016

Send Passwords Securely Through Your Body Instead of Wi-Fi


Rather than rely on easy-to-hack Wi-Fi or Bluetooth signals, researchers have developed a system that uses the human body to securely transmit passwords.
Computer scientists and electrical engineers have devised a way to relay the signal from a fingerprint scanner or touchpad through the body to a receiving device that is also in contact with the user. These "on-body" transmissions offer a secure option for authentication that does not require a password, the researchers said.
"Let’s say I want to open a door using an electronic smart lock," said study co-lead author Merhdad Hessar, an electrical engineering doctoral student at the University of Washington. "I can touch the doorknob and touch the fingerprint sensor on my phone and transmit my secret credentials through my body to open the door, without leaking that personal information over the air." [Body Odor and Brain Waves: 5 Cool New ID Technologies]
The system uses signals that are already generated by fingerprint sensors on smartphones and laptop touchpads, which have thus far been used to receive input about the physical characteristics of a user's finger.
"What is cool is that we’ve shown for the first time that fingerprint sensors can be re-purposed to send out information that is confined to the body," study senior author Shyam Gollakota, an assistant professor of computer science and engineering at the University of Washington, said in a statement.
The researchers devised a way to use the signals that are generated by fingerprint sensors and touchpads as output, corresponding to data like a password or access code. Rather than transmitting sensitive data "over the air" to a receiving device, the system allows that information to travel securely through the body to a receiver that's embedded in a device that needs authentication.
In tests so far, the system worked with iPhones, Lenovo laptop trackpads and the Adafruit touchpad (a trackpad that can be used with computers). The tests were successful with 10 people who had different heights, weights and body types, and worked when the subjects were in different postures or in motion. The on-body transmissions reached bit rates of 50 bps for the touchpads and 25 bps for the phone sensors — fast enough for a simple password or numerical code. Bit rates measure the amount of data that can be transmitted per second, with higher rates representing more data (for instance, a small file rather than a simple password).
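To put those bit rates in perspective, here is a quick back-of-the-envelope calculation (the 8-character password is an illustrative assumption):

```python
# Time to transmit a short credential at the reported on-body rates.
phone_bps = 25        # fingerprint-sensor rate reported in the study
touchpad_bps = 50     # touchpad rate reported in the study
password_bits = 8 * 8 # an assumed 8-character ASCII password

print(password_bits / phone_bps)     # 2.56 seconds via the phone sensor
print(password_bits / touchpad_bps)  # 1.28 seconds via a touchpad
```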
On-body transmissions could also be applied to medical devices, such as glucose monitors or insulin pumps, which require secure data sharing to confirm the patient's identity, according to the researchers.
Once they have more access to the software used by fingerprint sensor manufacturers, the researchers aim to explore how to provide greater and faster transmission options.
The technology is described in a study that was published online Sept. 12 in the Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing.

Compound TCP increases the speed of normal TCP in Windows 7

Microsoft has implemented its own congestion provider as an alternative to the standard TCP congestion provider. It is called Compound TCP (CTCP) and attempts to help certain connection types where TCP slow start takes forever:
  • High bandwidth connections requiring very large receive windows (RWIN)
  • Lossy connections, which require lots of retransmissions if the RWIN is too large.
These types of connection are becoming more common, as more international companies want to connect their different long distance offices with high speed connections.

TCP slow start is a way to probe the network connection: the sender "slowly" increases the TCP send window as it verifies that the connection can handle it. If retransmissions are required, growth of the send window slows down. On a high-bandwidth, high-latency connection, it can take an hour or more for TCP to make use of the full bandwidth. CTCP allows the TCP send window to grow faster, even if retransmissions are needed, but only if it detects that the network connection can handle it.
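As a back-of-the-envelope illustration of why standard TCP is so slow on long, fat pipes (the link speed and latency below are illustrative assumptions, not figures from the text):

```python
# After a loss, standard TCP halves its send window and then grows it
# by only one segment per round trip (congestion avoidance). On a
# high-bandwidth, high-latency link that recovery takes a long time,
# which is the situation CTCP is designed to improve.
bandwidth_bps = 1_000_000_000   # assumed 1 Gbit/s link
rtt_s = 0.2                     # assumed 200 ms round-trip time
mss_bytes = 1460                # typical TCP segment size

# Segments needed in flight to fill the pipe (bandwidth-delay product):
bdp_segments = bandwidth_bps / 8 * rtt_s / mss_bytes

# Window halved on loss, then one extra segment per RTT to climb back:
rtts_to_recover = bdp_segments / 2
minutes = rtts_to_recover * rtt_s / 60
print(round(minutes, 1))        # roughly half an hour per loss event
```

CTCP's contribution is to let the window grow considerably faster than this when it detects spare capacity on the path.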

To change the congestion provider to CTCP (default on Windows 2008):
netsh interface tcp set global congestionprovider=ctcp
To revert the congestion provider to default TCP (default on Windows Vista):
netsh interface tcp set global congestionprovider=none

Monday 26 September 2016

Smartphone locks cracked by Israel's Cellebrite (iPhone and Android)

Meeting Cellebrite - Israel's master phone crackers

It's an Israeli company that helps police forces gain access to data on the mobile phones of suspected criminals.
Cellebrite was in the headlines earlier this year when it was rumoured to have helped the FBI to crack an iPhone used by the San Bernardino shooter.
Now the company has told the BBC that it can get through the defences of just about any modern smartphone. But the firm refuses to say whether it supplies its technology to the police forces of repressive regimes.
Last week Cellebrite was showing off its technology to British customers. I was invited to a hotel in the Midlands, where police officers from across the UK had come to see equipment and software that first extracts data from suspects' phones, then analyses how they interact with others.
Rory Cellan-Jones taking photos
I was given a demo using a Samsung phone supplied by the company. It was running quite an old version of Android - 4.2 - but I was allowed to take it away for half an hour, put a password on it, and use it to take photos and send a text message.
When we returned, Yuval Ben-Moshe from Cellebrite took the phone and simply plugged it in via the charging socket to what looked like a chunky tablet computer. He explained that this was the kind of mobile unit the firm supplied to police forces for data extraction in the field.
He pressed a couple of buttons on the screen and then announced that the phone's lock code had been disabled.
"We can pretty much pull up any of the data that resides on the phone," he said.
He then downloaded the photos I'd taken and the message I'd sent on to a USB stick - the evidence of my activities could now be in the hands of the police.
It was impressive, not to say slightly concerning, that the security on the phone had been so easily bypassed - although this was not a particularly advanced phone, nor had I used services such as WhatsApp, which provide added levels of security.
Samsung phone connected to a computer
But Mr Ben-Moshe claimed that his firm could access data on "the largest number of devices that are out there in the industry".
Even Apple's new iPhone 7?
"We can definitely extract data from an iPhone 7 as well - the question is what data."
He said that Cellebrite had the biggest research and development team in the sector, constantly working to catch up with the new technology.
He was cagey about how much data could be extracted from services such as WhatsApp - "It's not a black/white yes/no answer" - but indicated that criminals might be fooling themselves if they thought any form of mobile communication was totally secure.
Back in the spring, there were reports that Cellebrite had helped the FBI get into the iPhone 5C left behind by the San Bernardino shooter Syed Rizwan Farook.
Unsurprisingly, Mr Ben-Moshe had nothing to say on this matter: "We cannot comment on any of our customers."

And on the matter of how fussy Cellebrite was about the customers for equipment that is used by law enforcement agencies around the world, he was also tight-lipped.
When I asked whether the company worked with oppressive governments he said: "I don't know the answer to that and I'm in no position to comment on that." And when I pressed him, he would say only that Cellebrite operated under international law and under the law of every jurisdiction where it worked.
Mobile phone companies are making great advances in providing secure devices - and law enforcement agencies in the UK and the US are complaining that this is helping criminals and terrorists evade detection.
But last month another Israeli firm, NSO Group, which also works for law enforcement and intelligence agencies, was reported to be behind a hack that allowed any iPhone to be easily "jailbroken" and have malware installed.
It seems the technology battle between the phone makers and those trying to penetrate their devices - for good reasons or bad - is a more even fight than we may have imagined.

Wednesday 21 September 2016

Creating an Android application using Java and Eclipse

Setting up Android Development Environment


Installing Android SDK

In order to start developing for Android you need the Software Development Kit. You can download it for Windows, Linux or Mac OS X.
Once downloaded you have to install it, on Windows just start the executable file.

Installing Java JDK and Eclipse

The Java Development Kit is needed to develop Android applications, since Android is based on Java and XML. Android code is written in an editor; the best supported, and in my opinion the best one around, is Eclipse. Eclipse is an open-source editor that supports a wide range of programming languages.

Installing the ADT Plugin

Once Eclipse is installed we need to connect the Android SDK with Eclipse; this is done by the ADT Plugin, which is easily installed from within Eclipse.
  1. Start Eclipse. Navigate in the menu to Help > Install New Software...
  2. Press 'Add...'. In the window that pops up you can fill in Name with an arbitrary name; a good suggestion could be "Android Plugin". In Location you have to paste:
  3. Click 'OK'. Make sure the checkbox for Developer Tools is selected and click 'Next'.
  4. Click 'Next'. Accept all the license agreements, click 'Finish' and restart Eclipse.
  5. To configure the plugin: choose Window > Preferences.
  6. Select 'Android' on the left panel and browse for the Android SDK you downloaded in the first step. (On Windows: C:\Program Files (x86)\Android\android-sdk)
  7. Click Apply and you're done.

Adding platforms and components

On Windows, start SDKManager.exe, located in C:\Program Files (x86)\Android\android-sdk, and install all platforms and components.

 

Create a new AVD

Follow the steps below to create a new AVD,
  • In Eclipse, select Window -> AVD Manager.
  • Click New…

The Create New AVD dialog appears.
  • Type the name of the AVD, for example “first_avd“.
  • Choose a target.
    • The target is the platform (that is, the version of the Android SDK, such as 2.3.3) you want to run on the emulator. It can be either Android API or Google API.
  • [Optional] An SD card size, say 400.
  • [Optional] Snapshot. Enable this to make emulator start-up faster.
    • To test this, enable this option. Get the emulator up and running and put it into the state you want. For example, home screen or menu screen. Now close the emulator. Run it again and now the emulator launches quickly with the saved state.
  • [Optional] Skin.
  • [Optional] You can add specific hardware features of the emulated device by clicking the New… button and selecting the feature. Refer to this link for more details on hardware options.
  • Click Create AVD.

Launch Android AVD (Emulator)

After creating a new Android AVD, you can launch it using “Android AVD Manager”.
Open “Android AVD Manager” either from Installation directory or Eclipse IDE and follow the steps as shown below.

How to create a simple Hello Android application


On this page, you will learn how to create a simple Hello Android application. We are creating the simple example for Android using the Eclipse IDE. To create the example:
  1. Create the new android project
  2. Write the message (optional)
  3. Run the android application

Hello Android Example

You need to follow the 3 steps mentioned above for creating the Hello android application.

1) Create the New Android project

For creating the new android project:
1) Select File > New > Project...
2) Select the Android project and click Next.
3) Fill in the details in this dialog box and click Finish.
Now an Android project has been created. You can explore the project and see the simple program; it looks like this:

2) Write the message

For writing the message we are using the TextView class. Change the onCreate method as:
  TextView textview = new TextView(this);
  textview.setText("Hello Android!");
  setContentView(textview);
Let's see the full code of MainActivity.java file.
package com.example.helloandroid;

import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.widget.TextView;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView textview = new TextView(this);
        textview.setText("Hello Android!");
        setContentView(textview);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.activity_main, menu);
        return true;
    }
}

To understand the first android application, visit the next page (internal details of hello android example).


3) Run the android application

To run the android application: Right click on your project > Run As.. > Android Application
The Android emulator might take 2 or 3 minutes to boot, so please have patience. After booting, the Eclipse plugin installs the application and launches the activity. You will see something like this: