This paper examines the dual-use challenges of foundation models and the consequent risks they pose to international security. As artificial intelligence (AI) models are increasingly tested and deployed across both civilian and military sectors, distinguishing between these uses becomes more complex, potentially leading to misunderstandings and unintended escalations among states. The broad capabilities of foundation models lower the cost of repurposing civilian models for military uses, making it difficult to discern another state's intentions in developing and deploying them. As military capabilities are increasingly augmented by AI, this discernment is crucial to evaluating the extent to which a state poses a military threat. Consequently, the ability to distinguish between military and civilian applications of these models is key to averting potential military escalations.
We review the pervasiveness of cyber threats and the roles of both attackers and cyber users (i.e., the targets of the attackers); users' lack of awareness of cyber threats; the complexity of the new cyber environment, including cyber risks; engineering approaches and tools to mitigate cyber threats; and current research to identify proactive steps that users and groups can take to reduce cyber threats. In addition, we review the research needed on the psychology of users, which shapes the risks they face from cyber-attacks. For the latter, we review the available theory at the individual and group levels that may help individual users, groups, and organizations take action against cyber threats. We end with future research needs and conclusions.
Artificial intelligence is opening up new avenues for value generation in enterprises, industries, communities, and society as a whole. Research has shown the technology to be relevant to many aspects of modern life, which has led to its adoption across a wide range of businesses and industries; its applications are too numerous to discuss exhaustively. The research below examines the applications of artificial intelligence (AI) in cybersecurity. Cybersecurity has likewise become a growing concern in the technology industry: as more companies incorporate information technology into their operations, they demand stronger security measures. The effort to protect available data and information has driven the growth of cybersecurity, and AI has come to influence cybersecurity heavily and at scale, with machine learning now embedded in many of the recent technologies that support it. This paper performs a literature review and examines the overall impact of artificial intelligence on cybersecurity.
Exaggerated fears about the paralysis of digital infrastructure and the loss of competitive advantage contribute to a spiral of mistrust in U.S.-China relations. In every category of putative Chinese cyber threat, there are also considerable Chinese vulnerabilities and Western advantages. China has inadvertently degraded the economic efficiency of its networks and exposed them to foreign infiltration by prioritizing political information control over technical cyber defense. Although China also actively infiltrates foreign targets, its ability to absorb stolen data is questionable, especially at the most competitive end of the value chain, where the United States dominates. Similarly, China's military cyber capacity cannot live up to its aggressive doctrinal aspirations, even as its efforts to guide national information technology development create vulnerabilities that more experienced U.S. cyber operators can attack.
Charl van Der Walt of SecureData SensePost examines some of the major stories that have hit the headlines recently and their implications for the future. He also identifies some key trends, including the role that government policy will play in the security of nations and individual citizens.
Over the past decades, industries and governments have progressively come to rely on space data-centric and data-dependent systems. This reliance has led to the emergence of malicious activities, also known as cyber threats, targeting such systems. To counter these threats, new technologies such as Artificial Intelligence (AI) have been implemented and deployed. Today, AI is highly capable of delivering fast, precise, and reliable command-and-control decision-making, as well as providing reliable vulnerability analysis using well-proven cutting-edge techniques, at least when applied to terrestrial applications. However, this might not yet be the case for space applications. AI can also play a transformative and important role in the future of space cybersecurity, raising questions about what to expect in the near-term future. Challenges and opportunities deriving from the adoption of AI-based solutions to achieve cybersecurity, and later cyber defence, objectives in both civil and military operations require rethinking in the form of a new framework and new ethical requirements, since most of these technologies were not designed to be used, or to overcome challenges, in space. Because of the highly contested and congested environment, as well as the highly interdisciplinary nature of threats to AI and Machine Learning (ML) technologies, including cybersecurity issues, a solid and open understanding of the technology itself is required, together with an understanding of its multidimensional uses and approaches. This includes the definition of legal and technical frameworks, ethical dimensions, and other concerns such as mission safety, national security, and technology development for future uses. The continuous endeavour to create a framework and regulate interdependent uses of combined technologies such as AI and cybersecurity to counter "new" threats requires the investigation and development of "living concepts" to determine in advance the vulnerabilities of networks and AI. This paper defines a cybersecurity risk and vulnerability taxonomy to enable the future application of AI in the space security field. Moreover, it assesses the extent to which a network digital-twin simulation can still protect networks against relentless cyber-attacks in space on users and ground segments. Both concepts are applied to the case study of Earth Observation (EO) operations, which allows conclusions to be drawn based on the business impact (reputational, environmental, and social) of a cyber malicious activity. Since AI technologies are developing daily, a regulatory framework is proposed using ethical and technical approaches for this technology and its use in space.
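As a concrete illustration of the digital-twin idea, consider the minimal Python sketch below (an illustrative toy, not the paper's actual tooling: the node names, the single-node compromise model, and the use of the networkx library are all assumptions), which builds a toy twin of an EO ground-segment network and asks which single compromised node would cut users off from satellite data:

    # Minimal sketch of a network digital twin for attack-impact analysis.
    # Assumptions: networkx graph model, toy EO ground-segment topology,
    # and an "attack" modelled as removal of one compromised node.
    import networkx as nx

    twin = nx.Graph()
    twin.add_edges_from([
        ("satellite", "ground_station"),
        ("ground_station", "mission_control"),
        ("mission_control", "data_archive"),
        ("data_archive", "user_portal"),
        ("mission_control", "user_portal"),  # redundant delivery path
    ])

    # Remove each internal node in turn and test whether users can
    # still reach the satellite downlink through the remaining twin.
    for node in list(twin.nodes):
        if node in ("satellite", "user_portal"):
            continue
        t = twin.copy()
        t.remove_node(node)
        ok = nx.has_path(t, "satellite", "user_portal")
        print(f"compromise of {node}: service {'survives' if ok else 'lost'}")

On this toy topology the sketch flags ground_station and mission_control as single points of failure, which is precisely the kind of finding a twin-based assessment can surface before a real attack does.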
Theoretical management thought posits that the challenge of addressing cyber risks has instigated a reevaluation of business resilience, particularly as organizations increasingly adopt digital transformation strategies. To investigate this proposition empirically, this paper delves into cybersecurity measures and their influence on business resilience. Employing a cross-sectional research design and a quantitative methodology, the study collected its data from 255 respondents, all entrepreneurial SME managers. Structural equation modeling techniques were leveraged to scrutinize both the measurement model and the structural model.
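For readers unfamiliar with the method, the sketch below shows how such a two-part analysis (a measurement model linking survey items to latent constructs, plus a structural model linking the constructs) might be specified in Python with the semopy package. The construct names, indicator items, and data file are illustrative assumptions, not the study's actual model:

    # Hedged SEM sketch: measurement model + structural model.
    # Assumptions: semopy package; illustrative Likert items cs1..res3
    # stored in a CSV with one row per respondent.
    import pandas as pd
    from semopy import Model

    spec = """
    # measurement model: latent constructs and their indicators
    CyberSecurity =~ cs1 + cs2 + cs3
    Resilience    =~ res1 + res2 + res3
    # structural model: hypothesized effect of security on resilience
    Resilience ~ CyberSecurity
    """

    df = pd.read_csv("sme_survey.csv")  # assumed 255 rows of survey responses
    model = Model(spec)
    model.fit(df)
    print(model.inspect())  # factor loadings, path estimate, standard errors

The inspect() output would contain the estimate for the Resilience ~ CyberSecurity path, which is the quantity a resilience hypothesis of this kind turns on.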
A multitude of studies have suggested potential factors that influence internet security awareness (ISA). Some, for example, used GDP and nationality to explain differing ISA levels across countries but yielded inconsistent results. This study proposed an extended knowledge-attitude-behaviour (KAB) model, which postulates that the education level of society at large moderates the relationship between knowledge and attitude. Using exposure to a full-time working environment as a proxy for this influence, it was hypothesized that significant differences would be found in the attitude and behaviour dimensions across groups with different conditions of exposure, and that exposure to full-time work plays a moderating role in KAB. To test the hypotheses, a large-scale survey adopting the Human Aspects of Information Security Questionnaire (HAIS-Q) was conducted with three groups of participants: 852 Year 1–3 students, 325 final-year students (ages 18–25), and 475 full-time employees (ages 18–50) in two cities in China.
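A standard way to formalize this moderation hypothesis (an illustrative sketch in conventional notation, not the study's own; all symbols below are assumptions) is a regression with an interaction term, where exposure to full-time work moderates the knowledge-attitude link:

    $A_i = \beta_0 + \beta_1 K_i + \beta_2 E_i + \beta_3 (K_i \times E_i) + \varepsilon_i$

with $A_i$ the attitude score, $K_i$ the knowledge score, and $E_i$ an indicator of exposure to full-time work. The moderating role of exposure corresponds to $\beta_3 \neq 0$: the slope of attitude on knowledge differs between exposed and unexposed groups, which is exactly what comparing students against full-time employees can reveal.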
As individuals, businesses, and governments increasingly rely on digital devices and IT, cyber attacks have grown in number, scale, and impact, and the attack surface for such attacks is expanding. First, this paper lists some characteristics of the cybersecurity and AI landscape that are relevant to policy and governance. Second, it reviews the ways AI may affect cybersecurity: through vulnerabilities in AI models and by enabling cyber offence and defence. Third, it surveys current governance and policy initiatives at the level of international and multilateral institutions, nation-states, industry, and the computer science and engineering communities. Finally, it explores open questions and recommendations relevant to key stakeholders, including the public, the computing community, and policymakers. Key issues include the extent to which international law applies to cyberspace, how AI will alter the offence-defence balance, boundaries for cyber operations, and how to incentivize vulnerability disclosure.
Cyber warfare and the advent of computer network operations have forced us to look again at the concept of the military objective. The definition set out in Article 52(2) of Additional Protocol I – that an object must by its nature, location, purpose or use, make an effective contribution to military action – is accepted as customary international law; its application in the cyber context, however, raises a number of issues which are examined in this article.