CISOs say AI & machine learning pose the most significant cyber risks
Security Magazine, June 23, 2023
The role of the chief information security officer (CISO) is undergoing a significant transformation as organizations face growing technological needs and risks. According to the 2023 Global CISO Survey by Heidrick & Struggles, the importance of the role continues to grow due to the prevalence of digital technologies, particularly artificial intelligence (AI), and rising concerns about cyberattacks such as ransomware. As organizations increasingly rely on AI, 46% of CISOs cited AI and machine learning as the most significant organizational risk, followed by geopolitical risk at 33% and cyberattacks at 19%. To address these challenges, organizations should prioritize succession plans and retention strategies and invest in leadership development to strengthen team capabilities.
While the CISO role is expanding in importance, many organizations are ill-prepared for the long term. The survey reveals that 41% of respondents admitted their company lacks a succession plan for the CISO role, although more than half of those without a plan are in the process of developing one. The survey also highlights a gap between how well respondents believe their corporate boards understand and respond to cybersecurity presentations and how few CISOs actually sit on those boards: only 30% of CISOs currently hold a board seat, up from 14% the previous year. This disparity underscores the need for organizations to involve CISOs in strategic decision-making and leverage their cybersecurity expertise.
To ensure the well-being and effectiveness of CISOs, it is crucial to address the stress and burnout they face. The survey reveals that role-related stress is the most significant personal risk for 71% of respondents, while 54% identify burnout. Both figures have increased from the previous year, underscoring the need for organizations to prioritize the mental health and work-life balance of their CISOs. Succession planning, retention strategies, and investment in leadership development can help reduce the pressure and workload on CISOs, sustaining their longevity and effectiveness in safeguarding organizations against cyber threats.
How Can We Trust Artificial Intelligence in the Car?
ZF
The automotive industry has long embraced artificial intelligence (AI) to drive advances in safety and efficiency. AI’s role in automated driving is pivotal, with numerous use cases demonstrating its value. One such use case is perception through vehicle sensors: with AI, cameras can reliably detect and identify pedestrians, cyclists, and other road users in diverse situations. AI also plays a crucial role in motion planning, enabling vehicles to navigate complex environments by dynamically controlling their longitudinal, lateral, and vertical dynamics.
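To make the perception use case concrete, here is a minimal sketch of camera-based road-user detection. It assumes a pretrained, general-purpose torchvision detector standing in for a production automotive perception stack; the frame path is a placeholder.

```python
# A minimal perception sketch: a pretrained torchvision detector stands in
# for a production automotive stack, and the frame path is a placeholder.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights, fasterrcnn_resnet50_fpn)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

frame = read_image("dashcam_frame.jpg")   # one camera frame (placeholder path)
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Report confident detections of road users the planner cares about.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8 and categories[label] in {"person", "bicycle", "car"}:
        print(f"{categories[label]}: confidence {score:.2f}")
```

A real system would fuse detections from multiple sensors and run safety-certified models on dedicated hardware, but the pipeline shape is the same: frame in, classified objects with confidence scores out.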
Dr. Georg Schneider, an AI expert at ZF, emphasizes that highly or fully automated driving would not be viable without AI. The challenges posed by automated driving demand its advanced capabilities: complex scenarios, such as changing lanes on highways, require intelligent decision-making. AI enables the virtual driver to weigh different options, such as taking an available gap or decelerating to follow another vehicle. Without AI, effectively addressing these intricate scenarios would be a significant hurdle.
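The gap-versus-follow decision Dr. Schneider describes can be illustrated with a toy planner. This is not ZF’s actual logic; the gap model and thresholds below are illustrative assumptions.

```python
# A toy gap-vs-follow decision (not ZF's actual planner); the gap model and
# thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Gap:
    length_m: float          # free space in the target lane
    closing_rate_mps: float  # how fast the gap is shrinking (negative = growing)

def plan_lane_change(gaps: list[Gap], min_gap_m: float = 30.0) -> str:
    # Accept the largest gap that is long enough and not closing too quickly.
    viable = [g for g in gaps if g.length_m >= min_gap_m and g.closing_rate_mps < 2.0]
    if viable:
        best = max(viable, key=lambda g: g.length_m)
        return f"merge into the {best.length_m:.0f} m gap"
    return "decelerate and follow the lead vehicle"

print(plan_lane_change([Gap(18.0, 0.5), Gap(42.0, 1.0)]))
```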
While AI brings numerous benefits, it also introduces certain risks. Dr. Arndt von Twickel, head of the “Cybersecurity for Intelligent Transportation Systems and Industry 4.0” unit at the Federal Office for Information Security (BSI), highlights the unique nature of AI systems. Unlike traditional computer systems, AI systems rely on data rather than predefined programming rules. While this data-driven approach offers great opportunities for training AI to handle complex traffic situations, errors in the training data can have cascading effects. Identifying and rectifying such errors is challenging, as there are no straightforward testing mechanisms. Thorough and comprehensive testing, along with the development of new methods, is necessary to ensure the reliability and safety of AI systems.
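One practical response to the training-data problem Dr. von Twickel raises is automated annotation validation, which catches label errors before they propagate into a trained model. The sketch below is hypothetical; the label set, field names, and checks are assumptions, not BSI or ZF tooling.

```python
# A hypothetical annotation sanity check; the label set, field names, and
# checks are assumptions, not BSI or ZF tooling.
def validate_annotation(box: dict, image_w: int, image_h: int) -> list[str]:
    issues = []
    if box["label"] not in {"pedestrian", "cyclist", "vehicle"}:
        issues.append(f"unknown label {box['label']!r}")
    if not (0 <= box["x_min"] < box["x_max"] <= image_w):
        issues.append("x coordinates out of bounds")
    if not (0 <= box["y_min"] < box["y_max"] <= image_h):
        issues.append("y coordinates out of bounds")
    return issues

# Example: a misspelled label and an out-of-frame box are flagged before training.
print(validate_annotation(
    {"label": "pedestrain", "x_min": -5, "x_max": 40, "y_min": 10, "y_max": 80},
    image_w=1920, image_h=1080))
```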
The Far-Reaching Risks of the Emerging Framework for AI Deployment With Jim Dempsey
Red Clover Advisors, July 6, 2023
Ensuring the security and safety of artificial intelligence (AI) is paramount in today’s rapidly evolving technological landscape. With the integration of AI into various aspects of our lives, it becomes crucial to address the potential risks and vulnerabilities associated with this technology. Cybersecurity expert Jim Dempsey emphasizes the need for companies to adopt privacy and security best practices to protect against malicious attacks and data breaches.
In a recent episode of the She Said Privacy, He Said Security Podcast, Jim Dempsey, the Senior Policy Advisor to the Stanford Program on Geopolitics, Technology, and Governance, discussed the risks of AI deployment and the need for regulations. He highlighted the irresponsible approach of some AI developers in releasing products without addressing known vulnerabilities. This has led to a market frenzy, with companies rushing to adopt AI without adequate consideration for privacy and security concerns.
To mitigate these risks, Jim Dempsey recommends several steps for companies embracing AI. First, organizations should conduct a thorough risk analysis and treat AI-related risks as supply chain risks. They should establish clear corporate policies for AI usage and require transparency from AI developers regarding training data and methods. Additionally, organizations must carefully review data flows and terms of use to safeguard sensitive information and maintain the confidentiality, integrity, and availability of their data.
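As a concrete illustration, these review steps could be tracked with a simple checklist object. The fields below are assumptions drawn from this summary, not a published standard or Dempsey’s own framework.

```python
# A hypothetical checklist object encoding the review steps above; the field
# names are assumptions drawn from this summary, not a published standard.
from dataclasses import dataclass

@dataclass
class AIVendorAssessment:
    vendor: str
    risk_analysis_done: bool = False        # AI risk treated as supply chain risk
    training_data_disclosed: bool = False   # transparency on training data and methods
    data_flows_reviewed: bool = False       # where does our data go, and who retains it?
    terms_of_use_reviewed: bool = False     # e.g., is customer data used for retraining?
    usage_policy_published: bool = False    # internal corporate policy covering the tool

    def outstanding_items(self) -> list[str]:
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

print(AIVendorAssessment(vendor="ExampleAI").outstanding_items())
```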
By adopting privacy and security best practices and taking a proactive approach to address AI-related risks, companies can ensure the security and trustworthiness of their AI systems. This will help protect against potential threats, maintain data privacy, and foster responsible AI deployment in an increasingly interconnected world.
A New Frontier of AI Innovation: A View into the Future of the AI Security Market
NightDragon, July 6, 2023
The widespread integration of artificial intelligence (AI) has introduced a new paradigm in cybersecurity, bringing forth unprecedented threats and the need for robust safeguards. As enterprises face a range of sophisticated attacks, questions arise about effectively managing risks and ensuring responsible AI usage. This pressing challenge has spurred significant innovation and the commercialization of security tools specifically designed to address issues like data privacy, sensitive data leakage, and model robustness.
In the current landscape, where machine learning models are becoming increasingly prevalent, security becomes a crucial concern. Until regulatory frameworks and AI governance mature, enterprises are left to navigate this terrain on their own, proactively developing frameworks to secure their ML models. The surge in demand for ML model security has given rise to a new generation of vendors, marking the dawn of a new era at the intersection of cybersecurity and machine learning.
The machine learning development lifecycle plays a pivotal role in securing ML models. As adoption of these tools in commercial environments accelerates, we can anticipate a corresponding rise in ML-specific attacks such as adversarial examples, data poisoning, and model theft. Startups and applications focused on protecting AI models have identified a significant market opportunity, signaling the need to address security risks associated with AI and generative AI systems.
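To ground the term “ML-specific attack,” here is a sketch of the fast gradient sign method (FGSM), a classic way of crafting adversarial examples and one of the threat classes these new vendors target. It assumes a differentiable PyTorch classifier and batched images with pixel values in [0, 1].

```python
# A sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015),
# one well-known ML-specific attack. `model` is any differentiable PyTorch
# classifier; `images` are batched tensors with pixels in [0, 1] (assumptions).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Take one step in the direction that increases the loss, then clip
    # back to the valid pixel range so the result is still a plausible image.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation this small is typically imperceptible to a human yet can flip a model’s prediction, which is why model robustness has become a commercial security concern in its own right.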
Governments around the world are also recognizing the urgency of regulating AI and are actively proposing new regulations, guidelines, and research frameworks to enhance AI security. Initiatives such as the FDA’s action plan, the Algorithmic Accountability Act, and the EU AI Act aim to establish regulatory frameworks, bolster governance, and enforce existing laws to ensure the safety and reliability of AI systems.
The convergence of AI and cybersecurity demands heightened vigilance, innovative solutions, and robust regulatory frameworks to protect against evolving threats and mitigate potential harm. By proactively securing ML models and adopting ML-specific security tools, enterprises can safeguard their business interests and investments. Collaboration and support for founders developing cutting-edge security solutions will be instrumental in defending against adversaries leveraging attacks on ML models. Governments and enterprises alike must prioritize AI security, recognizing it as a strategic imperative in today’s AI-driven world.