Human-Centered Computing (HCC) is focused on solving real-world problems through the integration of computing with people, technology, information, policy, and at times culture. HCC requires the Data Structures background of Computer Science; however, it also draws on many other areas such as Human-Computer Interaction, Psychology, Human Factors, and Industrial Engineering. I am excited about going out into the world to improve people's lives through technology. I am particularly interested in building human-machine interfaces for semi-autonomous cars, with an emphasis on system malfunctions, override issues, and driver orientation. Click on this link to learn more about Human-Centered Computing.
Naturalistic Driving Research
Driver distraction is a major concern that has resulted in numerous accidents and road fatalities. With rapid advancements in technology, drivers often have more in the car than they can handle, which can become a distraction. In response to these accidents and to handheld cell phone laws, automakers and application developers have focused on developing voice-texting alternatives for use in the car. This research studies drivers in as natural a driving state as possible. My research team and I have conducted naturalistic driving experiments in a real car to compare voice-texting alternatives. We are currently analyzing our results for publication in an academic journal.
My research team and I developed a tool called Video Verification, or vSquared. The tool can be used in different domains; however, the motivation for this work stemmed from concerns regarding voter ID laws across the United States. Many of these laws could disenfranchise Americans, and this tool was built to help alleviate that issue. Video Verification is very simple: it is essentially a video of a person stating their name and address to verify an identity claim. Even though Video Verification may seem simple, it is very difficult to impersonate someone this way, and if fraud is attempted, there will be a permanent record of the individual trying to impersonate someone else. Current methods of verification include a voter ID without a photo or some form of photo ID, both of which are easily manipulated; moreover, most people cannot tell when some form of manipulation is present. We have conducted experiments comparing our tool against these current methods, and it detected fraud 78% more accurately than a voter ID or photo ID. We are currently preparing our results for publication in an academic journal.
Prime III is a secure multimodal electronic voting system that provides universal accessibility. It is currently the world's most accessible electronic voting system in a single machine and can be used by anyone regardless of age or ability. Prime III implements Universal Design. By Universal Design, we mean “an approach to the design of all products and environments to be as usable as possible regardless of age, ability or situation. Other terms for Universal Design include Design For All, Inclusive Design, and Barrier-Free Design” (Universal Design Education Online). No other known electronic voting system has been developed alongside hands-on research activities, including usability studies of electronic voting systems with blind, deaf, and disabled voters (primevotingsystem.com). We are currently running a study with our voting technology to examine how often participants notice review-screen anomalies. In other studies, over 50% of participants did not notice review-screen anomalies. Security is of utmost importance in an election, which is one of the reasons we are doing this work. If participants are able to recognize anomalies with our technology, we will be even more confident that we are on the right track to creating an even better system.
The term iTech means Interactive Technology Assistant. It started with Dr. Dale-Marie Wilson while she was working toward her PhD in Computer Science, initially for a VI manual, and it has been expanded in many ways since then. In that work, Dr. Wilson used speech recognition to query information from a database and return results. Another area of my current research focuses on a voice-activated application, very similar to iTech, that queries information from a driver's manual. Research has shown that talking, and preparing to talk, places the highest cognitive load on the brain. Additionally, some drivers, particularly those on long trips, may want to find information about the different features of a recently purchased vehicle. Some may want to physically look through the manual while driving; however, this is very unsafe. Our aim is to provide drivers with the ability to interact naturally with their vehicle using their voice. The Human Centered Computing Lab at Clemson University will shortly begin studies with iTech in a driving simulator and in a real car to measure metrics such as the cognitive load on the brain while a driver asks the car a question. These metrics also include, but are not limited to, efficiency and user satisfaction.
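To illustrate the query flow described above, here is a minimal sketch of an iTech-style lookup: recognized speech (stubbed here as plain text, since the actual speech-recognition component is not shown) is matched against sections of a driver's manual. The manual entries, topic names, and matching heuristic are all hypothetical, not the actual iTech implementation.

```python
# Hypothetical sketch: match a spoken query (already transcribed to text)
# against topics in a small driver's-manual "database".
MANUAL = {
    "cruise control": "Press SET on the steering wheel to hold the current speed.",
    "tire pressure": "Recommended cold tire pressure is listed on the driver-door jamb.",
    "headlights": "Turn the stalk collar to AUTO for automatic headlights.",
}

def answer_query(utterance: str) -> str:
    """Return the manual entry whose topic appears in the spoken query."""
    text = utterance.lower()
    for topic, entry in MANUAL.items():
        if topic in text:
            return entry
    return "Sorry, I could not find that in the manual."
```

In a real system, the utterance would come from a speech recognizer and the answer would be read back with text-to-speech, keeping the interaction hands-free and eyes-free.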
Voiceing is version 2.2 of a tool called VoiceTEXT, which has similar features; however, Voiceing adds advanced ones. Both VoiceTEXT 2.1 and Voiceing 2.2 were developed by the HCC lab and allow for hands-free, eyes-free communication. Cell phone use while driving has become a very prevalent issue, as it has resulted in numerous deaths. Enforcing cell phone laws is often hard for law enforcement officials because a phone today is no longer just a cell phone and can contain various personal, private, and confidential information. The law alone does not stop people from texting while driving, which led the Human Centered Computing Lab to provide a safer solution. Although Voiceing performs transcription, the technology also sends a voice message, just like a text message, to the recipient, who can reply and compose a message using their voice without ever looking at their cell phone. Voiceing has three modes: voice, text, and email. With these, a recipient can receive a voice message, a text message, and also an email message if they wish. There are very pertinent reasons for having these three modes in Voiceing; I will only elaborate on the main ones. The voice delivery mode is available to solve the issue of using a handheld device while driving: when users set their mode to voice only, they receive only the voice recording of what the sender said and can listen and respond hands-free, eyes-free. The text mode allows only the text delivery option, which transcribes the sender's voice message and sends it to the recipient as a text message. The text mode is optimal when the recipient is in an extremely loud environment, such as a football game, or in a quiet environment, such as a meeting, in which case a text message will suffice.
Lastly, the email mode includes a wave-file attachment so the recipient can listen to the voice message, along with a transcribed text of the message that was sent. A user has the option of enabling one, two, or all three modes based on their preference. Voiceing is also usable by persons with many different disabilities. People with disabilities, such as those who are blind or without arms, who have never had the text-message experience can gain it through Voiceing. Older and younger persons who may be dyslexic or have nerve problems will be able to easily use Voiceing to send or receive a voice message. The HCC lab has already conducted studies to measure user satisfaction with VoiceTEXT. We will shortly run additional studies to measure further metrics with our new and improved Voiceing technology. Click on this link to go to the Voiceing webpage.
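The three delivery modes above can be sketched as a simple dispatch: a recipient enables any subset of voice, text, and email, and an incoming message is delivered as audio, a transcript, or an email carrying both. This is an illustrative sketch under assumed names (`Recipient`, `deliver`), not the actual Voiceing implementation.

```python
# Hypothetical sketch of Voiceing-style delivery modes.
from dataclasses import dataclass, field

@dataclass
class Recipient:
    name: str
    modes: set = field(default_factory=lambda: {"voice"})  # enabled delivery modes

def deliver(recipient: Recipient, audio_clip: str, transcript: str) -> list:
    """Produce one delivery per enabled mode for an incoming voice message."""
    deliveries = []
    if "voice" in recipient.modes:   # hands-free, eyes-free playback while driving
        deliveries.append(("voice", audio_clip))
    if "text" in recipient.modes:    # for very loud or very quiet environments
        deliveries.append(("text", transcript))
    if "email" in recipient.modes:   # wave attachment plus transcribed text
        deliveries.append(("email", (audio_clip, transcript)))
    return deliveries
```

For example, a recipient in a meeting might enable only `{"text", "email"}`, receiving the transcript by text and the audio-plus-transcript by email while the voice playback is suppressed.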