Draft:Psychology of Cybersecurity

The psychology of cybersecurity (often intersecting with usable security and cyberpsychology) is an interdisciplinary field studying how human behavior, cognitive biases, and social dynamics influence information security. While traditional cybersecurity focuses on hardware and software vulnerabilities, this discipline addresses the "human factor," which is frequently exploited in cyberattacks.[1] It draws primarily from cognitive psychology, behavioral economics, and human–computer interaction.

History and evolution

The challenge of human behavior in computing was noted as early as the 1960s with multi-user mainframes like the Compatible Time-Sharing System (CTSS). In 1966, a software error on CTSS caused the system's master password file to be displayed to every user upon login—one of the earliest documented security incidents attributable to a combination of system design and human factors.[2] Even before this incident, CTSS project leader Fernando Corbató had observed that users found passwords fundamentally burdensome and routinely circumvented them. Decades later, Corbató acknowledged that the password system had become unmanageable at scale.[3]

These behaviors gained broader significance in the 1990s as the Internet became widely accessible. High-profile incidents involving figures like Kevin Mitnick demonstrated that exploiting human trust through social engineering (such as pretexting over the phone) was often more efficient than attempting technical exploits.[4]

Cognitive and behavioral factors

Much of the psychology of cybersecurity focuses on decision-making under stress or uncertainty. Researchers apply frameworks like Daniel Kahneman's dual process theory to explain why individuals fall for phishing or business email compromise (BEC). Threat actors typically design malicious communications to trigger fast, emotional "System 1" thinking—using urgency, authority, or panic—which prompts users to click a link or wire funds before their analytical "System 2" can assess the situation's legitimacy.[5]

The effectiveness of phishing at scale has been documented in industry research. The 2016 Verizon Data Breach Investigations Report found that in controlled simulations, approximately 30% of phishing emails were opened by their targets, and 12% of recipients proceeded to click the malicious attachment or link. The median time for the first user in an organization to open a phishing email was 1 minute and 40 seconds.[6]

Several well-documented psychological phenomena actively influence daily security practices:

  • Cognitive biases: The optimism bias often leads users to believe they are unlikely to be targeted by cybercriminals, resulting in lax password practices or delayed software updates. The availability heuristic can cause individuals to focus on highly publicized, sophisticated threats while ignoring common, statistically probable risks like credential reuse.[7]
  • Social influence: Attackers leverage established principles of persuasion, such as those categorized by Robert Cialdini. Impersonating a CEO leverages the psychological trigger of authority, while fake tech support scams often use reciprocity (offering to fix a small problem before asking for network credentials).[8]

Security fatigue and organizational dynamics

As cybersecurity measures proliferate to protect sensitive data and satisfy increasingly stringent regulatory frameworks, the employees expected to implement and follow them face a growing cumulative burden. Researchers describe the result as security fatigue, a state of mental and emotional exhaustion arising from repeated exposure to security demands.

Alert fatigue

A primary example is alert fatigue, heavily documented among both end-users and security operations center (SOC) analysts. Continuous exposure to browser warnings or antivirus pop-ups, particularly those that are false positives, conditions users to habituate to the alerts and dismiss them automatically without reading the content.[9] The scale of this problem is significant in enterprise settings: industry surveys have estimated that SOC teams in large organizations receive thousands of alerts daily, a substantial proportion of which turn out to be false positives, meaning that genuinely malicious indicators are frequently buried in noise.[10]

Password fatigue

Similarly, password fatigue describes the weariness experienced by people required to remember an excessive number of passwords as part of their daily routine, such as logging in to a computer at work. Users cope with the memory burden by making predictable, iterative changes to their passwords (such as updating "Password01!" to "Password02!"), a strategy that weakens overall network security because new passwords can be inferred from old ones.[11]
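The weakness of such iterative updates can be illustrated with a short sketch. The function below is a hypothetical illustration (not drawn from the cited literature) of how an attacker holding one leaked password can mechanically enumerate its likely successors:

```python
import re

def iterate_variants(old_password, tries=5):
    """Enumerate likely successors of a password containing a counter,
    modeling the common coping pattern of incrementing digits on each
    forced rotation (e.g. "Password01!" -> "Password02!")."""
    match = re.search(r"\d+", old_password)
    if not match:
        return []  # no digit run to increment
    digits = match.group(0)
    variants = []
    for step in range(1, tries + 1):
        # Bump the counter and preserve its zero-padding width.
        bumped = str(int(digits) + step).zfill(len(digits))
        variants.append(old_password.replace(digits, bumped, 1))
    return variants
```

Given "Password01!", the first candidates generated are "Password02!" and "Password03!", which is why forced rotation policies that do not check for similarity to previous passwords offer little protection against an attacker who already knows an older credential.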

Compliance budget

Within corporate environments, these behavioral patterns create a "compliance budget." Beautement, Sasse, and Wonham introduced the concept through field research at a large European organization, where they observed that employees routinely emailed sensitive documents to personal accounts because the approved secure file transfer system required a cumbersome multi-step process. The researchers argued that security policies carry a hidden cost that competes directly with productivity: each additional security measure draws from a finite reserve of employee willingness to comply. Once this budget is exhausted, employees default to insecure workarounds regardless of training or awareness campaigns.[12] This phenomenon, in which employees adopt unauthorized tools or cloud services to circumvent perceived obstacles, is often referred to as Shadow IT.

References

  1. ^ Anderson, Ross (2020). Security Engineering: A Guide to Building Dependable Distributed Systems (3rd ed.). Wiley. ISBN 978-1119642787.
  2. ^ Corbató, Fernando J. (1991). "On Building Systems That Will Fail". Communications of the ACM. 34 (9). doi:10.1145/114669.114686.
  3. ^ McMillan, Robert (2012-01-27). "The World's First Computer Password? It Was Useless Too". Wired.
  4. ^ Mitnick, Kevin; Simon, William (2002). The Art of Deception: Controlling the Human Element of Security. Wiley. ISBN 978-0471237129.
  5. ^ Vishwanath, Arun; Herath, Tejaswini; Chen, Rui; Wang, Jingguo; Rao, H.R. (2011). "Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model". Decision Support Systems. 51 (3). doi:10.1016/j.dss.2011.03.002.
  6. ^ 2016 Data Breach Investigations Report (Report). Verizon Enterprise Solutions. 2016.
  7. ^ Pattinson, Malcolm; Jerram, Chris; Parsons, Kathryn; McCormac, Agata; Butavicius, Marcus (2012). "Why do some people manage their information security so well?". Information Security South Africa (ISSA). doi:10.1109/ISSA.2012.6320444.
  8. ^ Hadnagy, Christopher (2010). Social Engineering: The Art of Human Hacking. Wiley. ISBN 978-0470639535.
  9. ^ Akhawe, Devdatta; Felt, Adrienne Porter (2013). "Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness". 22nd USENIX Security Symposium.
  10. ^ Cost of a Data Breach Report (Report). Ponemon Institute / IBM Security. 2022.
  11. ^ Florencio, Dinei; Herley, Cormac (2007). "A Large-Scale Study of Web Password Habits". Proceedings of the 16th International Conference on World Wide Web (WWW). doi:10.1145/1242572.1242661.
  12. ^ Beautement, Adam; Sasse, M. Angela; Wonham, Mike (2008). "The compliance budget: managing security behaviour in organisations". Proceedings of the 2008 New Security Paradigms Workshop. doi:10.1145/1532035.1532042.

Further reading

  • Cranor, Lorrie Faith; Garfinkel, Simson (2005). Security and Usability: Designing Secure Systems that People Can Use. O'Reilly Media. ISBN 978-0596008277.
  • Schneier, Bruce (2015). Secrets and Lies: Digital Security in a Networked World. Wiley. ISBN 978-1119092438.