Research Shows How a Shifting Landscape Is Driving SBOMs

The use of AI is having both positive and concerning impacts.


Cybersecurity solutions provider Black Duck recently unveiled The State of Embedded Software Quality and Safety 2025, a report showing how AI is redefining the embedded software landscape. Findings include:

  • 89 percent of responding developers and security professionals use AI assistants, and 96 percent embed open source AI models.
  • Weak governance leaves 21 percent uncertain they can stop vulnerabilities, and shadow AI (developers using tools against policy) affects 18 percent.
  • 71 percent of organizations now produce Software Bills of Materials, driven more by customer and partner demand than by compliance.
  • 80 percent have adopted memory-safe languages, with Python overtaking C++ in some embedded contexts.
  • 86 percent of executives call projects successful, compared with just 56 percent of developers, highlighting an optimism gap that carries business risk.

The report's findings on shadow AI being introduced at the developer's desktop, and on the need for continuous SBOM monitoring after deployment, underscore that a shift-left-only strategy is no longer sufficient. Risk is introduced, discovered, and managed across the entire software development life cycle, so a modern strategy must shift everywhere.
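To make "shift everywhere" concrete, one practical step is re-checking a deployed product's SBOM against a public vulnerability feed on a schedule rather than only at build time. The sketch below is a minimal illustration, assuming a CycloneDX-style JSON SBOM and the public OSV.dev query API; the file name, ecosystem value, and field layout are assumptions for the example, not details from the report.

```python
# Minimal sketch: periodically re-scan a deployed product's SBOM against the
# OSV.dev vulnerability feed. Assumes a CycloneDX-style JSON SBOM on disk;
# the path, ecosystem, and component fields below are illustrative.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV API endpoint

def load_components(sbom_path: str) -> list[dict]:
    """Return the component list from a CycloneDX JSON SBOM."""
    with open(sbom_path, encoding="utf-8") as fh:
        sbom = json.load(fh)
    return sbom.get("components", [])

def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Ask OSV for known vulnerabilities affecting one component version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Re-run this (e.g. from cron or a pipeline job) so monitoring continues
    # after deployment, not just at build time.
    for comp in load_components("deployed-product.cdx.json"):
        name, version = comp.get("name"), comp.get("version")
        if not name or not version:
            continue
        vulns = query_osv(name, version)
        if vulns:
            print(f"{name} {version}: " + ", ".join(v["id"] for v in vulns))
```

Run on a schedule, the same loop keeps surfacing newly disclosed vulnerabilities against components that have not changed since release.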

A number of industry stakeholders shared their thoughts on the findings:

Diana Kelley, Chief Information Security Officer at Noma Security

"AI systems, and especially agentic tools, are fragile to certain kinds of manipulation because their behaviors and outputs can be drastically altered by malicious or poorly formed prompts. AI interprets prompts as executable commands, so a single malformed prompt can reasonably result in wiped systems. 

"Robust AI security and agentic AI governance has never been more critical, ensuring systems are not harmed due to AI agent system access.

"AI agents bridge the gap between LLMs, tools, and system actions. Agents can execute commands, often autonomously, or instruct tools to perform actions. If an attacker can influence the agent via malicious AI prompt, they have the ability to direct the system to perform destructive operations at scale with a much bigger blast radius than a traditional AI application."

Nicole Carignan, Senior VP, Security & AI Strategy, and Field CISO at Darktrace

"Before organizations can think meaningfully about AI governance, they need to lay the groundwork with strong data science principles. That means understanding how data is sourced, structured, classified, and secured—because AI systems are only as reliable as the data they’re built on.

"For organizations adopting third-party AI tools, it's also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies.

"As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies. But there is no one-size-fits-all approach. 

"Each organization must tailor its AI policies based on its unique risk profile, use cases and regulatory requirements. That’s why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions.

"Effective AI governance requires deep cross-functional collaboration. Security, privacy, legal, HR, compliance, data, and product leaders each bring vital perspectives. Together, they must shape policies that prioritize ethics, data privacy, and safety—while still enabling innovation. In the absence of mature regulatory frameworks, industry collaboration is equally critical. Sharing successful governance models and operational insights will help raise the bar across sectors.

"As these systems evolve, so must governance strategies. Static policies won’t be enough, AI governance must be dynamic, real-time, and embedded from the start. Organizations that treat governance and security as strategic enablers will be best positioned to harness the full potential of AI safely and responsibly."

Guy Feinberg, Growth Product Manager at Oasis Security

"AI agents, like human employees, can be manipulated. Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions. The real risk isn’t AI itself, but the fact that organizations don’t manage these non-human identities (NHIs) with the same security controls as human users.

"Manipulation Is inevitable. Just as we can’t prevent attackers from tricking people, we can’t stop them from manipulating AI agents. The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least privilege access, monitoring, and clear policies to prevent abuse. 

"Security teams need visibility. If these NHIs were properly governed, security teams could detect and block unauthorized actions before they escalate into a breach. Organizations should:

  • Treat AI agents like human users. Assign them only the permissions they need and continuously monitor their activity.
  • Implement strong identity governance. Track which systems and data AI agents can access, and revoke unnecessary privileges.
  • Assume AI will be manipulated. Build security controls that detect and prevent unauthorized actions, just as you would with phishing-resistant authentication for humans.

"The bottom line is that you can’t stop attackers from manipulating AI, just like you can’t stop them from phishing employees. The solution is better governance and security for all identities—human and non-human alike."

Mayuresh Dani, Security Research Manager at Qualys Threat Research Unit

"In recent times, government mandates are forcing vendors to create and share SBOMs with their customers. Organizations should request SBOMs from their vendors. This is the easiest approach. There are other approaches where the firmware is dumped and actively probed for, but this may lead to a breach of agreements. Such activities can also be carried out in conjunction with a vendor’s approval.

"Organizations should maintain and audit the existence of exposed ports by their network devices. These should then be mapped to the installed software based on the vendor provided SBOM. These are the highest priority since they will be publicly exposed. Secondly, OS updates should be preceded by reading the change logs that signifies the software's being updated, removed.

"Note that SBOMs will bring visibility into which components are being used in a project. This can definitely help in a post compromise scenario where triaging for affected systems is necessary. However, more scrutiny is needed when dealing with open-source projects. Steps like detecting the use and vetting open-source project code should be made mandatory. Also, there should be a verification mechanism for everyone who contributes to open-source projects.

"Security leaders can harden their defenses against software supply chain attacks by investing in visibility and risk assessment across their complex software environment, including SBOM risk assessment and Software Composition Analysis (SCA). Part of the risk assessment should include accounting for upcoming EoS software so they can upgrade or replace it proactively."

Satyam Sinha, CEO and Co-founder at Acuvity

"There has been a great deal of information and mindfulness about the risks and threats with regards to AI provided over the past year. In addition, there are abundant regulations brought in by various governments. 

"In our discussions with customers, it is evident that they are overwhelmed on how to prioritize and tackle the issues - there’s a lot that needs to be done. At the face of it, personnel seems to be a key inhibitor, however, this pain will only grow. GenAI has helped in multiple industries from customer support to writing code. Workflows that could not be automated are being handled by AI agents. We have to consider the use of GenAI native security products and techniques which will help achieve a multiplier effect on the personnel.

"The field of AI has seen massive leaps over the last two years, but it is evolving with new developments nearly every day. The gap in confidence and understanding of AI creates a massive opportunity for AI native security products to be created which can ease this gap. In addition, enterprises must consider approaches to bridge this gap with specialized learning programs or certifications to aid their cybersecurity teams. 

"GenAI has helped in multiple industries from customer support to writing code. Workflows that could not be automated are being handled by AI agents.

"Moving forward, we must consider the use of Gen-AI native security products and techniques which will help achieve a multiplier effect on the personnel. This is the only way to solve this problem."
