Using Personalized Warning Interfaces to Protect Against Phishing Website Attacks


Phishing schemes are the most prevalent internet crime: more than 500 million phishing attacks were reported in 2022, according to Forbes Advisor.

Phishing website attacks not only involve financial and personal risks for individuals, but can also result in detrimental compromises to corporate information and systems, government security agencies, and other organizations. Lubar College of Business researchers in Information Technology Management have been studying phishing website attacks and ways to reduce the threats.

Fatemeh (Mariam) Zahedi, UWM Distinguished Professor Emerita, Yan Chen, Ryder Eminent Scholar Chair at Florida International University (UWM PhD ’12), and Huimin Zhao, Professor and Roger L. Fitzsimonds Distinguished Scholar, have a paper forthcoming in the top journal in Information Systems that details how to increase user protection against phishing website attacks.

The research team notes that although most browsers include phishing detection tools that warn users against phishing website attacks, most people are unaware that such tools are working behind the scenes. When they see a warning, many people do not trust it and fall victim because of their own security behaviors. Users can hardly be blamed for this: they rarely interact with, or build a relationship with, a detection tool that runs quietly in the background, and an unfamiliar warning message on a rarely seen interface does little to persuade them of its authenticity.

The research team therefore hypothesized that if users could personalize the warning interface by designing it themselves, their security behaviors could be altered, increasing their compliance and self-protection against phishing website attacks. The reasoning is that designing their own warning interface fosters a close relationship with the detection tool and its warnings, thereby increasing users' trust in the tool and compliance with the warning.

In working out how to personalize warnings, the research team realized that users need access to a full menu of warning interface elements and should be prompted at the right time to choose what they want to include. The team therefore designed a two-phase study: first, it built a knowledge base of users' choices of warning interface elements, called an ontology of interface elements (an ontology is a formal representation of knowledge and concepts in a domain); then it used the ontology to create a prototype for a personalized warning interface.
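As a loose illustration of what such an ontology could look like in code, here is a minimal sketch in Python. The category and element names below are hypothetical examples based on categories mentioned later in this article, not the researchers' actual ontology.

```python
# Illustrative sketch only: a toy ontology of warning-interface elements,
# modeled as categories mapping to lists of candidate elements. The names
# below are invented examples, not the paper's actual ontology.
warning_ontology = {
    "signal_word": ["Warning", "Danger", "Caution"],
    "hazard_information": ["Unsafe", "Deceptive site", "Not secure"],
    "icon": ["stop_sign", "exclamation_triangle", "shield"],
    "color_scheme": ["red_on_white", "yellow_on_black"],
    "consequence": [
        "Your personal information may be stolen.",
        "Your account credentials may be compromised.",
    ],
}

# Each category is a point of personalization; each list is the menu of
# elements a user could pick from when designing their own warning.
print(sorted(warning_ontology))
```

Structuring the knowledge this way keeps the two ideas from the article separate: the categories carry the overall structure, while the element lists hold the concrete choices offered to users.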

In phase one, the research team built the ontology by collecting, categorizing, and structuring interface elements from numerous sources and published papers. The ontology contains the structures, the categories, and all elements within each category. The team consulted with experts and conducted three rounds of surveys of various user populations to identify which categories of elements are most important to users.

In phase two, the research team built a prototype of a software tool that simulated a detection tool with different levels of reliability. The tool prompted users to choose elements in each category and built a personalized warning interface for each user. The team used this prototype in extensive controlled lab experiments to test users' behavior when accessing websites that might be phishing sites. The experiments produced data showing objectively that personalizing the warning interface does change people's security behaviors and increases their protection against phishing website attacks.
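To make the selection step concrete, here is a minimal, hypothetical sketch of how per-category user choices might be assembled into one personalized warning. The function name, the fallback rule, and the tiny example ontology are all invented for illustration; the article does not describe the prototype at this level of detail.

```python
def build_personalized_warning(ontology, choices):
    """Assemble a warning interface from a user's per-category choices.

    `ontology` maps each category to its list of candidate elements;
    `choices` maps a category to the element the user picked. For any
    category the user skipped, fall back to the first listed element.
    Hypothetical sketch, not the researchers' actual code.
    """
    warning = {}
    for category, elements in ontology.items():
        chosen = choices.get(category, elements[0])
        if chosen not in elements:
            raise ValueError(f"{chosen!r} is not a known {category} element")
        warning[category] = chosen
    return warning

# Tiny invented ontology and one user's choices:
ontology = {
    "signal_word": ["Warning", "Danger"],
    "icon": ["stop_sign", "exclamation_triangle"],
    "color_scheme": ["red_on_white", "yellow_on_black"],
}
user_choices = {"signal_word": "Warning", "icon": "stop_sign"}
print(build_personalized_warning(ontology, user_choices))
# -> {'signal_word': 'Warning', 'icon': 'stop_sign', 'color_scheme': 'red_on_white'}
```

Validating each choice against the ontology mirrors the article's point that users pick from a structured menu rather than composing warnings freely.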

Combining the ontology of interface elements with identified user preferences emerged as the key to personalizing the interface. Personalization strengthens the relationship between the detection tool and users by familiarizing them with warning elements, increasing their interest in those elements, and involving them in individualizing their warning design.

The researchers relied on multiple theories in this project. For example, they used the ontology approach, one of the foundational methods in knowledge base development in artificial intelligence. They formulated a model for assessing their prototype using evolutionary stimulus-organism-response theory as well as trust theories. Drawing on the neurobiology of human vision, they argued that users who have their own preferred warning interfaces may process warning messages faster.

The research team reports interesting, detailed findings about overall preferences for categories and elements in the paper's many appendices. Categories of elements that emerged as most preferred by users include consequence (what may happen if a user visits a phishing site), signal word, hazard information, icon, and color scheme. The team also reports overall preferences for elements within each category: for example, "warning" for signal word, "unsafe" for hazard information, and the stop sign for icon, indicating a preference for familiar elements that the brain can process quickly. This suggests that moderately high-intensity signal words and icons can produce proper arousal when facing a threat. Users also overwhelmingly preferred having the option to proceed to the website rather than being blocked from visiting it.

The research team reports that some categories of elements play a more important role in increasing users' trust in the warning and their sense of having a personalized warning interface. Five categories stood out: the consequence statement about the threat, signal word, bold text, color scheme, and icon.

The research team also shows how to update the ontology as users reveal their preferences through the personalization process and as new elements emerge. The team suggests that detection tool developers can modify the ontology of warning elements for various user subpopulations, such as children, teenagers, and senior citizens, as well as for different organization types. For instance, personalizing warning interfaces using comics and animations could attract children’s attention to security warnings.
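One simple way such updating could work, sketched here purely as a hypothetical illustration (the article does not describe the researchers' actual update procedure), is to reorder each category's menu by how often users actually chose each element:

```python
from collections import Counter

def update_ontology(ontology, observed_choices):
    """Reorder each category's element list by observed popularity.

    `observed_choices` is a list of per-user choice dicts mapping a
    category to the element that user picked. Elements never chosen keep
    their relative order after the popular ones, since Python's sort is
    stable. Hypothetical sketch, not the paper's update mechanism.
    """
    updated = {}
    for category, elements in ontology.items():
        counts = Counter(c[category] for c in observed_choices if category in c)
        # Sort by descending pick count; unchosen elements count as zero.
        updated[category] = sorted(elements, key=lambda e: -counts[e])
    return updated

ontology = {"signal_word": ["Warning", "Danger", "Caution"]}
picks = [{"signal_word": "Danger"}, {"signal_word": "Danger"}, {"signal_word": "Warning"}]
print(update_ontology(ontology, picks))
# -> {'signal_word': ['Danger', 'Warning', 'Caution']}
```

A developer could run a step like this periodically, so the default menus drift toward what a given subpopulation actually prefers.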

At a broader level, the researchers say that policymakers and agencies interested in promoting online security should be aware of this methodology's pervasive influence in increasing warning compliance and should encourage the use of ontology-based intelligent interface personalization.

This research is forthcoming in Information Systems Research as "Ontology-Based Intelligent Interface Personalization for Protection Against Phishing Attacks," by Fatemeh (Mariam) Zahedi, Yan Chen, and Huimin Zhao.

Research@Lubar
Faculty scholarship in the Lubar College of Business spans the business fields and beyond through both theoretical and applied research published in leading journals. Here are some of our faculty's most recent publications:
Depicting Risk Profile Over Time: A Novel Multiperiod Loan Default Prediction Approach
MIS Quarterly
Authors: Zhao Wang, Cuiqing Jiang, and Huimin Zhao
Nonprofit Organizations’ Financial Obligations and the Paycheck Protection Program
Management Science
Authors: Daniel G. Neely, Gregory D. Saxton, and Paul A. Wong
Multiperiod Channel Coordination in Franchise Networks: The Necessity of Internal Inventory Trading and Franchiser Involvement
Production and Operations Management
Authors: Xiaohang Yue, Liangbin Yang, and Rong Li
Calling Oneself and Others In: Brokering Identities in Diversity Training
Academy of Management Journal
Authors: Keimei Sugiyama, Jamie J. Ladge, and Diana Bilimoria
A Theory-Driven Deep Learning Model for Voice Chat-Based Customer Response Prediction
Information Systems Research
Authors: Gang Chen, Shuaiyong Xiao, Chenghong Zhang, and Huimin Zhao