NOTE TO EDITORS: October is National Cybersecurity Awareness Month.

Newswise — HUNTSVILLE, Ala. (Oct. 15, 2014) – In groundbreaking research, two University of Alabama in Huntsville (UAH) professors are identifying the most effective ways we are influenced to give away personal information online, and which warnings work best to stop us from being so generous with people who profit from our information or use it in ways that harm us.

Dr. Sandra Carpenter, professor of psychology, and Dr. Feng (Frank) Zhu, assistant professor of computer science, are currently working under a $464,000 National Science Foundation (NSF) grant, and as far as the NSF can tell, the UAH professors are among the first asking these types of questions.

“We’re trying to have people be more careful with the personal information they divulge online,” says Dr. Carpenter. “The problem is, what can you say to them that will be an effective warning?”

The scientists routinely call such inquiries “attacks” because, while they seem friendly and innocuous on our screens – and are designed to appear so – we don’t really know who is collecting the information or what it is intended for. Gatherers often use one or more social influence strategies that originate in marketing or cultural contexts, like offering a reward, to increase their success rates.

It would be nice if something could effectively clue us in before our fingers go tappity-tap. Warning apps or plug-ins are possible outcomes of the research, but they’ll work only if they actually grab our attention.

Dr. Zhu and Dr. Carpenter are using eye trackers to record where a user’s eyes rest on a screen and how long they linger at any point, and they have reviewed industry research on which warnings work for toxic chemicals and other hazards. They are also probing with the Communication-Human Information Processing (C-HIP) model to discover which authoritative warning sources are more credible with users.
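In analyses like these, raw eye-tracker output is typically reduced to “dwell time”: how long the gaze stays within each region of the screen, such as a warning banner or a form field. The article does not describe the researchers’ actual pipeline, so the Python sketch below is purely illustrative; the sample rate, coordinates and region names are all invented for the example.

```python
# A minimal sketch (not the researchers' actual pipeline): reducing raw
# eye-tracker samples to dwell time per on-screen region. The 60 Hz rate,
# coordinates and region names are assumptions for illustration only.

# Each gaze sample: (timestamp_seconds, x_pixel, y_pixel)
gaze_samples = [
    (0.000, 512, 300), (0.017, 515, 302), (0.033, 518, 305),
    (0.050, 120, 700), (0.067, 122, 698),
]

# Hypothetical areas of interest (AOIs): name -> (left, top, right, bottom)
aois = {
    "warning_banner": (0, 0, 1024, 400),
    "form_field": (0, 600, 300, 768),
}

def dwell_times(samples, aois, sample_interval=1 / 60):
    """Accumulate time the gaze spends inside each AOI, assuming
    evenly spaced samples from a 60 Hz tracker."""
    totals = {name: 0.0 for name in aois}
    for _, x, y in samples:
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                totals[name] += sample_interval
    return totals

print(dwell_times(gaze_samples, aois))
# e.g. {'warning_banner': 0.05, 'form_field': 0.033}
```

Longer dwell times on a warning region are one signal that the warning actually captured attention before the user moved on.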

“C-HIP indicates the stream of processes a person goes through in order to accept a warning,” Dr. Carpenter says. They include assessing the strength of the authority issuing it, comprehending it, remembering it, changing attitudes because of it, and being motivated to modify behavior.

“We’re looking at all those different stages and the effectiveness of warnings,” Dr. Carpenter says.
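To make the stage-by-stage idea concrete, here is a minimal Python sketch of the C-HIP logic as described above: a warning must clear every stage in order, and failure at any one stage means behavior goes unchanged. The stage names follow the article; the pass/fail inputs are hypothetical.

```python
# Illustrative sketch of the C-HIP model as described in the article: a
# warning is processed through sequential stages, and failing any stage
# means the user's behavior does not change. Inputs here are hypothetical.

CHIP_STAGES = [
    "source_credibility",  # assessing the strength of the authority issuing it
    "comprehension",       # understanding what the warning says
    "memory",              # remembering it at the moment of disclosure
    "attitude_change",     # changing attitudes because of it
    "motivation",          # being motivated to modify behavior
]

def warning_outcome(stage_results):
    """Return the first stage the warning fails at, or None on success."""
    for stage in CHIP_STAGES:
        if not stage_results.get(stage, False):
            return stage  # processing stops here; behavior is unchanged
    return None  # every stage passed: the user modifies their behavior

# A warning that is credible, understood and remembered, but not motivating:
results = {"source_credibility": True, "comprehension": True,
           "memory": True, "attitude_change": True, "motivation": False}
print(warning_outcome(results))  # -> 'motivation'
```

Framing warnings this way shows why the researchers test each stage separately: a warning can be perfectly clear and still fail if, say, its source is not seen as credible.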

The scientists have performed experiments on disclosure behaviors when test subjects are not under attack, when they are under attack, and when they are under attack but have been effectively warned. (Sandra Carpenter, Feng Zhu, Swapna Kolimi, “Reducing Online Identity Disclosure Using Warnings,” Applied Ergonomics, 45(5), 2014)

“When they are under attack with an effective warning, we find that people disclose at about the rate of those not being attacked,” says Dr. Carpenter. “We are currently testing which warning words work best and which source is most credible and effective for the warning.”

PEOPLE TARGET

They’ve been asking these kinds of questions for a while, determining the scope and parameters of disclosure since 2008. That research identified the types of information people are more cautious about divulging and which techniques for eliciting private information are most effective.

Dr. Carpenter recalls the day psychology met computer science and the seed for collaboration was planted.

“What’s that quote? Amateurs attack software; professionals attack people?” Dr. Carpenter says. “Well, Frank came into my office one day and said that quote to me, and then he said, ‘So I have to know about psychology.’”

Dr. Zhu digs a book from his office shelf to provide the exact quote from computer privacy and security expert Bruce Schneier: “Only amateurs attack machines; professionals target people.”

At first, the researchers experimented to find out how concerned we are about privacy in general, then organized the findings into a chart of relative concerns.

“We grouped what elements people think are important and what are not important,” says Dr. Zhu. “What privacy concerns do you have and what safety measures have you taken? We had to design an experimental environment that would reflect their true attitudes.”

“We asked people questions like how important it is to them to keep a hobby private, and how important it is to keep a Social Security number private,” Dr. Carpenter says. (Feng Zhu, Sandra Carpenter, Ajinkya Kulkarni, “Understanding Identity Exposure in Pervasive Computing Environments,” Pervasive and Mobile Computing, Vol. 8, 2012; and Feng Zhu, Sandra Carpenter, Ajinkya Kulkarni, Chockalingam Chidambaram, Shruti Pathak, “Understanding and Minimizing Identity Exposure in Ubiquitous Computing Environments,” Proceedings of the 2009 International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services)

And they found out very early in the research just how amazingly open people are online. “One of the biggest problems is that, even when people know we are trying to collect this information on them, they are willing to give it out freely,” Dr. Carpenter says.

Initially, “we didn’t think any kind of a warning that we were collecting the information was necessary,” Dr. Carpenter says. Right away, their experiments showed that just about everybody tested was willing to share everything, under any conditions, if simply asked.

“We looked to see under reciprocity conditions what information people would give us and we found out that even in the control group, they gave us all the information requested,” says Dr. Carpenter.

So next, they informed test subjects that what they supplied was being collected for use, even though during the experiments none of the data provided by subjects left the laboratory or was permanently stored.

“We told them we were collecting it for a third party,” Dr. Carpenter says.

Within that framework, they began to compare the success rates of attacks that simply asked for information without using social influences against attacks that employed one, such as reciprocity. (Feng Zhu, Sandra Carpenter, Ajinkya Kulkarni, Swapna Kolimi, “Reciprocity Attacks,” Symposium On Usable Privacy and Security, Pittsburgh, PA, 2011)

“Using a social influencer with a third-party scenario, we found that people are three to five times more likely to disclose their private information,” says Dr. Carpenter. “We’re doing this research to try to figure out how we can best reduce that disclosure behavior.”
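As a purely hypothetical illustration of where a “three to five times” figure comes from, the comparison is a simple ratio of disclosure rates between conditions. The counts below are invented for the arithmetic, not the study’s data.

```python
# Hypothetical arithmetic only: deriving a relative-likelihood figure from
# disclosure counts in two conditions. These numbers are invented; the
# published papers report the actual rates.

def disclosure_rate(disclosed, total):
    return disclosed / total

baseline = disclosure_rate(disclosed=8, total=100)     # plain request
influenced = disclosure_rate(disclosed=32, total=100)  # request + reciprocity

print(f"relative likelihood: {influenced / baseline:.1f}x")  # -> 4.0x
```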
