The growth of AI is driving an increased focus on security and pushing more use cases to the Edge, according to new research from PSA Certified. But with two-thirds (68%) of technology decision-makers concerned that rapid advances in AI risk outpacing the industry’s ability to secure products, devices and services, the acceleration in AI must be matched by an equivalent acceleration in security investment and best practice to ensure trusted AI deployment.
A major factor driving the need for greater AI security is Edge technology. By processing, analysing and storing data at the Edge of the network, or on the device itself, Edge devices offer efficiency, security and privacy advantages over a centralised cloud-based location. This could be why 85% of device manufacturers (OEMs), design manufacturers (ODMs), SIPs, software vendors and other technology decision-makers believe that security concerns will drive more AI use cases to the Edge. But in this push for added efficiency, the security of Edge devices has become even more crucial, and organisations will need to double down on securing and protecting their devices and AI models to meet the demands of deploying AI at scale.
Addressing the AI security lag
Security matters across the supply chain, whether you’re a deployer of services, a device vendor or a consumer of those services. Indeed, the survey of 1,260 global technology decision-makers found that security has risen as a priority in the last 12 months for nearly three-quarters (73%) of respondents, with 69% now placing greater emphasis on security as a result of AI.
However, even as AI raises the importance placed on security, there is an AI-security lag that needs to be closed if AI’s full potential is to be realised.
Only half (50%) of those surveyed believe they are currently investing enough in security, and a significant proportion are neglecting important security foundations, such as security certification, that underpin best practice. Just over half (54%) currently use externally validated security certifications, while fewer rely on independent third-party testing or evaluation of products (48%) or on threat analysis and threat modelling (51%) to improve the security robustness of their products and services. These security fundamentals are straightforward to implement and should be the foundation as organisations seek to build consumer trust in AI-driven services.
David Maidment, Senior Director, Market Strategy, at Arm (a PSA Certified co-founder), said: “There is an important interconnect between AI and security: one doesn’t scale without the other. While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors. It’s more imperative than ever that those in the connected device ecosystem don’t skip best-practice security in the hunt for AI features. The entire value chain needs to take collective responsibility and ensure that consumer trust in AI-driven services is maintained. The good news is that the industry recognises the need to prepare, and the criticality of prioritising security investment to future-proof systems against new attack methods and rising security threats linked to rapid adoption of Edge AI.”
AI and security: net positive but both must scale together
With four in five respondents (80%) claiming that security built into products is a driver of the bottom line, there’s a commercial as well as a reputational benefit to continued security investment. The same proportion (80%) also agree that compliance with security regulation is now a top priority, up six percentage points from the 74% who listed it as a top-three priority in 2023.
With Edge AI booming and AI inference increasing exponentially, an unprecedented amount of personal data is being processed on billions of individual endpoint devices, each of which needs to be secured. To secure Edge devices and maintain compliance with emerging cybersecurity regulation, stakeholders across the connected device ecosystem must play their part in creating a secure Edge AI life cycle, one that covers both the secure deployment of the device and the secure management of the trusted AI models deployed at the Edge.
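The research does not prescribe how trusted models should be managed at the Edge, but a common building block is to verify a cryptographic signature on the model artefact before it is loaded. The sketch below is a minimal illustration of that idea, assuming Ed25519 signatures via the Python `cryptography` package; the file paths and key-provisioning details are hypothetical.

```python
# Minimal sketch: refuse to load an AI model unless its detached signature
# verifies against a public key provisioned on the device. Assumes the
# `cryptography` package; paths and key handling are illustrative only.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def load_trusted_model(model_path: str, sig_path: str, pubkey_raw: bytes) -> bytes:
    """Return the model bytes only if the signature check passes."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_raw)  # 32-byte raw key
    model_bytes = Path(model_path).read_bytes()
    signature = Path(sig_path).read_bytes()
    try:
        public_key.verify(signature, model_bytes)  # raises InvalidSignature on tampering
    except InvalidSignature as exc:
        raise RuntimeError(f"refusing to load {model_path}: bad signature") from exc
    return model_bytes
```

In a full life cycle the public key would live in tamper-resistant storage (for example a hardware root of trust), with the signature produced by the model publisher at release time.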
Despite some concern that rapid advances in AI are outpacing the industry’s ability to secure products, devices and services (68%), organisations broadly feel poised to capitalise on the AI opportunity and are optimistic that security can keep pace: 67% believe their organisation is well equipped to manage the potential security risks associated with an upsurge in AI. More decision-makers are also placing importance on increasing the security of their products and services (46%) than on increasing their AI readiness (39%), recognising the importance of scaling security and AI in step.
But with a majority of respondents (78%) also agreeing that they need to do more to prepare for AI, and with concerns around security risks still prevalent, security must remain a central pillar of technology strategy. Improving and scaling security in an era of interoperability and Edge AI requires established standards, certification and trusted hardware that all businesses can rely on. By embedding security by design, organisations can establish a benchmark of best practice that will help protect them against risk both today and in the future.
Original article source:
https://www.electronicspecifier.com/products/artificial-intelligence/the-promise-of-ai-relies-on-scaling-security-as-edge-ai-booms
FAQ
- What is Edge AI?
– Answer: Edge AI refers to the deployment of artificial intelligence algorithms on edge devices, such as sensors, cameras, and other IoT devices, rather than in centralized data centers or cloud environments. This allows for real-time processing and analysis of data closer to the source.
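To make the definition concrete, here is a toy illustration of the pattern: inference runs on the device that captures the data, so raw readings never have to leave it. The “model” is a stand-in (a fixed linear layer), not a real trained network.

```python
# Toy Edge AI pattern: classify a sensor reading locally and share only
# the result. The weights are made up; a real device would load a trained
# model (e.g. a quantised network) instead.
import numpy as np

WEIGHTS = np.array([[0.8, -0.2], [0.1, 0.9]])  # stand-in for trained parameters
BIAS = np.array([0.05, -0.03])

def on_device_inference(sensor_reading: np.ndarray) -> int:
    """Run the model locally; only the class label would be sent upstream."""
    logits = sensor_reading @ WEIGHTS + BIAS
    return int(np.argmax(logits))

label = on_device_inference(np.array([0.7, 0.1]))  # raw data stays on the device
print(f"local decision: class {label}")
```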
- Why is scaling security important for Edge AI?
– Answer: As Edge AI becomes more prevalent, ensuring the security of these edge devices is crucial because they often handle sensitive data and are vulnerable to various security threats. Scaling security measures helps protect data integrity, privacy, and the overall functionality of the AI systems.
- What are the main security concerns for Edge AI?
– Answer: Main concerns include data breaches, unauthorized access to devices, tampering with AI algorithms, and attacks on data integrity. Edge devices are often less secure than centralized systems, making them potential targets for cyberattacks.
- How can organizations address these security challenges?
– Answer: Organizations can address these challenges by implementing robust encryption protocols, secure authentication mechanisms, regular software updates, and continuous monitoring of edge devices. Additionally, adopting a zero-trust security model and ensuring secure communication channels can help mitigate risks.
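As one concrete example of these measures, the sketch below authenticates device telemetry with an HMAC so that tampering in transit is detectable. It uses only the Python standard library; the device key and payload are illustrative, and in practice the channel would also be encrypted (for example with TLS).

```python
# Illustrative sketch: tamper-evident telemetry using an HMAC tag.
# Key provisioning and transport encryption are out of scope here.
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"  # hypothetical

def sign_telemetry(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "mac": tag}

def verify_telemetry(message: dict) -> bool:
    expected = hmac.new(DEVICE_KEY, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])  # constant-time compare

msg = sign_telemetry({"device_id": "edge-42", "temp_c": 21.5})
assert verify_telemetry(msg)
```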
- What role do standards and regulations play in Edge AI security?
– Answer: Standards and regulations provide guidelines and best practices for securing edge devices and AI systems. They help ensure that manufacturers and developers adhere to minimum security requirements and provide a framework for addressing vulnerabilities and threats.
- How can AI itself contribute to enhancing Edge AI security?
– Answer: AI can be used to detect and respond to security threats in real time, analyze patterns of suspicious activity, and automate security measures. Machine learning algorithms can help identify anomalies and potential vulnerabilities, improving the overall security posture of edge devices.
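A small example of this idea: an unsupervised anomaly detector trained on a device’s normal traffic profile and used to flag outliers. This assumes scikit-learn is available; the features (requests per minute, bytes sent) and the figures are invented for illustration.

```python
# Illustrative anomaly detection for device behaviour with IsolationForest.
# Feature choice, contamination rate and data are all made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
baseline = rng.normal(loc=[60, 500], scale=[5, 50], size=(500, 2))  # normal traffic
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspect = np.array([[400, 9000]])   # a burst far outside the baseline profile
print(detector.predict(suspect))    # -1 flags the reading as anomalous
```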
- What are some emerging trends in Edge AI security?
– Answer: Emerging trends include the use of advanced encryption techniques, decentralized security models, AI-driven threat detection, and the integration of blockchain technology for secure data management and transactions.