EyePhone Seminar Report
#1

Abstract
As smartphones evolve, researchers are studying new techniques to facilitate human-mobile interaction. We propose EyePhone, a novel "hands-free" interface system capable of driving mobile applications and functions using only the movement and actions of the user's eyes (e.g., a wink).

EyePhone tracks the user's eye movement across the phone's display using the camera mounted on the front of the phone. More specifically, machine learning algorithms are used to: i) track the eye and infer its position on the mobile phone display as a user views a particular application; and ii) detect eye blinks that emulate mouse clicks to activate the target application under view. We present a prototype implementation of EyePhone on a Nokia N810, which is able to track the position of the eye on the display and map this position to an application that is then activated by a wink. At no time does the user have to physically touch the phone screen.
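To make the interaction model concrete, here is a minimal Python sketch of the idea described above: the display is divided into a grid of regions, the tracked eye position selects a region, and a blink acts as the "click" that activates the application shown there. This is our illustration, not the authors' implementation; the 3x3 grid, the display resolution, and the apps mapping are assumptions.

# Illustrative sketch (not the authors' code) of mapping an eye
# position to an on-screen region and treating a blink as a click.
SCREEN_W, SCREEN_H = 800, 480   # Nokia N810 display resolution
ROWS, COLS = 3, 3               # assumed 3x3 grid of on-screen "buttons"

def cell_for(eye_x: float, eye_y: float) -> tuple:
    """Map an eye position in screen pixels to a grid cell."""
    col = min(int(eye_x // (SCREEN_W / COLS)), COLS - 1)
    row = min(int(eye_y // (SCREEN_H / ROWS)), ROWS - 1)
    return row, col

def handle_frame(eye_x, eye_y, blinked, apps):
    """apps maps (row, col) cells to application launchers (callables)."""
    cell = cell_for(eye_x, eye_y)
    if blinked and cell in apps:
        apps[cell]()   # the blink emulates a mouse click on that region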

Human-Computer Interaction (HCI) researchers and phone vendors are continually searching for new approaches to reduce the effort users exert when accessing applications on limited-form-factor devices such as mobile phones. The most significant innovation of recent years is the adoption of touchscreen technology, introduced with Apple's iPhone [1] and recently followed by all the other major vendors, such as Nokia [2] and HTC [3]. The touchscreen has changed the way people interact with their mobile phones because it provides an intuitive way to perform actions using the movement of one or more fingers on the display (for example, pinching a photo to zoom in or out, or panning to move a map).

Human-Phone Interaction
Human-phone interaction (HPI) represents an extension of the HCI field, since HPI presents new challenges that must be specifically addressed: mobility issues, the phone's form factor, and its resource constraints (e.g., energy and computation). More specifically, the distinguishing factors of the mobile phone setting are mobility and the lack of sophisticated hardware support, i.e., the dedicated headsets, overhead cameras, and dedicated sensors that HCI applications often require. In what follows, we discuss these issues.

Challenges of mobility. One immediate consequence of mobility is that a mobile phone moves through unpredictable contexts, i.e., situations and scenarios that are difficult to foresee during the design phase of an HPI application.

A mobile phone is subject to uncontrolled movement: people interact with their phones while stationary, while moving, and so on. It is almost impossible to predict how and where people will use their mobile phones, yet an HPI application must be able to operate reliably under any conditions it encounters. Consider the following examples: two HPI applications, one using the accelerometer, the other relying on the phone's camera. Imagine exploiting the accelerometer to infer simple gestures a person makes with the phone in hand, for example, shaking the phone to initiate a phone call or turning the phone over to reject one.

The challenge is to distinguish the gesture itself from any other action the person might be performing. For example, if a person is running, or if a user tosses the phone onto a couch, the sudden movement of the phone could produce signatures easily mistaken for the gesture. There are many examples where a classifier could be confused in this way, and in response the wrong actions would be triggered on the phone. Similarly, if the phone camera is used to infer a user action [5][9], it is important that the inference algorithm operating on the video captured by the camera be robust to lighting conditions, which can vary from place to place. In addition, video frames are blurred by the movement of the phone.
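To make the confusion concrete, consider a naive shake detector built on the accelerometer, as in the example above. The following is a minimal sketch under our own assumptions (the window size and variance threshold are invented for illustration, not taken from the paper); the point is that running, or a phone tossed onto a couch, produces a motion-energy signature that this kind of classifier cannot distinguish from a deliberate shake.

# A minimal sketch of a naive accelerometer-based shake detector.
import math
from collections import deque

WINDOW = 50             # number of recent samples to consider (assumed)
SHAKE_THRESHOLD = 4.0   # variance threshold on magnitude (assumed)

window = deque(maxlen=WINDOW)

def is_shake(sample_xyz) -> bool:
    """Return True when recent motion energy looks like a shake."""
    x, y, z = sample_xyz
    window.append(math.sqrt(x * x + y * y + z * z))
    if len(window) < WINDOW:
        return False
    mean = sum(window) / len(window)
    variance = sum((m - mean) ** 2 for m in window) / len(window)
    # Also fires while running or when the phone lands on a couch:
    return variance > SHAKE_THRESHOLD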

Because HPI application developers cannot assume optimal operating conditions, i.e., that users will behave in an idealized manner (e.g., that a user can be forced to stop walking or running before initiating a phone call), the effects of mobility must be taken into account for an HPI application to be reliable and scalable.

Hardware challenges. Unlike HCI applications, an HPI implementation should not depend on any external hardware. Asking people to carry or wear additional hardware in order to use their phone could reduce the adoption of the technology.

In addition, state-of-the-art HCI hardware, such as glasses-mounted cameras or dedicated helmets, is not yet small enough to be worn by people for long periods of time. An HPI application must therefore rely as much as possible on the sensors already on board the phone. And although modern smartphones are increasingly capable computationally, they are still limited when running complex machine learning algorithms. HPI solutions must adopt lightweight machine learning techniques to run accurately and in an energy-efficient manner on mobile phones.

EyePhone Design
One question we address in this paper is the utility of a cheap and ubiquitous sensor, the camera, in building HPI applications. We developed eye tracking and blink detection mechanisms based on algorithms originally designed for desktop machines with USB cameras, and we demonstrate the limitations of a standard HCI technique when it is used to realize an HPI application on a resource-limited mobile device such as the Nokia N810.

The algorithmic design of EyePhone is divided into the following pipeline phases (a skeleton of the pipeline is sketched after the list):

1) an eye detection phase;

2) an open eye template creation phase;

3) an eye tracking phase;

4) a blink detection phase.
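A minimal sketch of how these four phases fit together (our structuring, not the authors' code). The phase implementations are passed in as functions, since each is the subject of the discussion that follows.

# Sketch of the EyePhone processing pipeline over camera frames.
def eyephone_pipeline(frames, detect_eye, make_template, track_eye, detect_blink):
    """Yield (event, position) pairs for a stream of camera frames."""
    template = None
    for frame in frames:
        eye_region = detect_eye(frame)            # phase 1: eye detection
        if eye_region is None:
            continue                              # no eye found in this frame
        if template is None:
            template = make_template(eye_region)  # phase 2: open eye template
        position = track_eye(frame, template)     # phase 3: eye tracking
        event = "blink" if detect_blink(eye_region, template) else "gaze"
        yield event, position                     # phase 4 decides the event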

In what follows, we discuss each of the phases in turn.

Eye detection. This phase consists of finding the contour of the eyes by applying a motion analysis technique that operates on consecutive frames. The eye pair is identified by the left and right eye contours. While the original algorithm identifies the eye pair with almost no error when running on a desktop computer with a fixed camera (see the left image in Figure 1), we obtain errors when the algorithm runs on the phone, due to the quality of the N810 camera and the unavoidable movement of the phone while in a person's hand (see the right image in Figure 1): smaller spots are misinterpreted as eye contours.

Figure 1. Left: example of the eye contour pair returned by the original algorithm running on a desktop with a USB camera; the two white clusters identify the eye pair. Right: example of the many contours returned by EyePhone on the Nokia N810.

Based on these experimental observations, we modified the original algorithm by: i) reducing the image resolution, which according to the original authors reduces the eye detection error rate, and ii) adding two further criteria to the original heuristics to filter out false eye contours. In particular, we filter out all contours whose width and height in pixels do not satisfy width_min ≤ width ≤ width_max and height_min ≤ height ≤ height_max.

The width_min, width_max, height_min, and height_max thresholds, which bound the plausible sizes of a true eye contour, are determined under various experimental conditions (e.g., bright, dark, moving, not moving) and with different people. This design approach greatly increases the accuracy of eye tracking.
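A hedged sketch of how this modified eye detection phase could look, using OpenCV frame differencing in place of the original motion analysis implementation (which is not listed in the paper). The difference threshold and the size bounds are illustrative placeholders, not the experimentally calibrated values.

# Sketch: motion analysis on consecutive frames plus size filtering.
import cv2

# Plausible-eye-contour size bounds in pixels (assumed for illustration).
WIDTH_MIN, WIDTH_MAX = 10, 60
HEIGHT_MIN, HEIGHT_MAX = 5, 40

def detect_eye_contours(prev_frame, curr_frame, scale=0.5):
    """Return bounding boxes of candidate eye contours."""
    # i) reduce the image resolution to lower the detection error rate
    prev_small = cv2.resize(prev_frame, None, fx=scale, fy=scale)
    curr_small = cv2.resize(curr_frame, None, fx=scale, fy=scale)
    prev_gray = cv2.cvtColor(prev_small, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_small, cv2.COLOR_BGR2GRAY)

    # Motion analysis on consecutive frames: difference then threshold.
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # ii) filter out false eye contours by size:
    # width_min <= width <= width_max and height_min <= height <= height_max
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if WIDTH_MIN <= w <= WIDTH_MAX and HEIGHT_MIN <= h <= HEIGHT_MAX:
            boxes.append((x, y, w, h))
    return boxes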

Copyright:
Emiliano Miluzzo, Tianyu Wang, Andrew T. Campbell, Computer Science Department, Dartmouth College, Hanover, NH, USA
MobiHeld 2010, August 30, 2010, New Delhi, India.
References
[1] Apple iPhone. http://www.apple.com/iphone.
[2] Nokia Series. http://europe.nokia.com/nseries/.
[3] HTC. http://www.htc.com/www/product.

#2
I want a seminar report on EyePhone, as it's my seminar topic.