ai-benchmark/tests/summarization/https___venturebeat.com_security_salesforce-research-across-the-c-suite-trust-is-the-key-to-scaling-agentic.txt
Partner Content

Salesforce Research: Across the C-suite, trust is the key to scaling agentic AI

VB Staff
January 21, 2026
Presented by Salesforce

In 2025, Salesforce conducted a series of C-suite research studies to capture if and how top decision-makers are building an agentic AI strategy. While the research shows positive signals, such as agent adoption expected to surge 327% over the next two years, the dominant finding is clear: leaders may be racing to deploy AI agents, but unlocking real value hinges on trust in data, systems, employees, and, above all, the leadership guiding the change.

Trust is the connective tissue that determines whether companies can actually scale AI agents and unlock the value they're projecting. At Salesforce, this trust imperative is operationalized through Agentforce. The Agentforce 360 Platform, the foundational layer of the company's agentic platform, embeds trust directly into how agents reason, act, and collaborate with humans, ensuring leaders can implement agentic AI at scale.

"As organizations scale AI agents, trust becomes the accelerator," says Joe Inzerillo, chief digital officer of Salesforce. "When leaders trust their data, their systems, and their governance, AI moves from experimentation to enterprise impact.
Trust isn't a constraint; it's the foundation that allows companies to move faster, align teams, and unlock the full value of the agentic enterprise."

Trust is the accelerator — and the bottleneck

Quality data, security, and employee adoption are the pillars of trust, according to the research among hundreds of CIOs, CFOs, and CHROs:

- One of CIOs' top two fears around AI implementation is a lack of trusted data
- 66% of CFOs say security or privacy threats keep them up at night regarding their AI strategy
- Chief HR officers (CHROs) see trust through the lens of their people: 73% say their employees remain unaware of how AI agents will impact their work

"What's striking is how aligned leaders have become around trust," Inzerillo says. "Whether it's CIOs wrestling with data quality, CFOs scrutinizing security risk, or CHROs focused on employee adoption, the message is the same. Agentic AI only works when trust is built end-to-end: technically, operationally, and culturally."

The good news is that a study from IDC found that preparation is key: CEOs prepared to implement AI agents/digital labor are nearly two times more invested in ethics, governance, and guardrails than those who aren't.

To help leaders accelerate agentic AI value, the Agentforce 360 Platform embeds these key pillars of trust — data quality, security, and employee adoption — directly into its architecture. The Einstein Trust Layer ensures data security and accuracy through real-time grounding, while the platform prioritizes employee adoption by embedding autonomous agents directly into the natural flow of work. With Agentforce, humans and agents are poised to work together.

CIOs: Trusted data and context must be embedded into the flow of work

CIOs are turning agentic AI ambition into action: the Salesforce study among this audience found that AI budgets have nearly doubled, with CIOs saying they are dedicating 30% of this budget to agentic AI.
But data fears loom large: only 23% of CIOs are completely confident they are investing in AI with built-in data governance. Built-in is the key: 93% of CIOs say the successful adoption of AI agents in the workplace hinges on their integration within the flow of everyday work. As Salesforce CIO Daniel Shmitt points out, "Embedding AI into the flow of work and building trust into every step helps everyone move faster and with more confidence."

CFOs: Budgets rely on trust

Today, CFOs are all-in on agentic AI. Five years ago, 70% were sticking with conservative AI strategies. That number has now plummeted to only 4%, with a third adopting an aggressive approach. But trepidation remains: 66% say security and privacy threats are their top concern, signaling that trust directly drives whether they approve budgets.

"The introduction of digital labor isn't just a technical upgrade — it represents a decisive and strategic shift for CFOs," said Robin Washington, president and chief operating and financial officer at Salesforce. "With AI agents, we're not merely transforming business models; we're fundamentally reshaping the entire scope of the CFO function. This demands a new mindset as we expand beyond financial stewards to also become architects of agentic enterprise value."

CHROs: Organizational resilience is essential

Meanwhile, chief HR officers (CHROs) are working to build trust among teams so that employees feel confident they will work alongside agents and be given opportunities to reskill and grow. In fact, 86% of CHROs say that integrating AI agents/digital labor alongside their existing workforce will be a critical part of their job, and 81% of HR chiefs plan to reskill their employees for better job opportunities in the era of agentic AI. HR leaders are vital to ensuring organizational resilience, since 73% say their employees don't yet understand how AI agents/digital labor will impact their work.
"We're in the midst of a once-in-a-lifetime transformation of work with digital labor that is unlocking new levels of productivity, autonomy, and agency at a speed never before thought possible," says Nathalie Scardino, president and chief people officer at Salesforce. "Every industry must redesign jobs, reskill, and redeploy talent — and every employee will need to learn new human, agent, and business skills to thrive in the digital labor revolution."

Embedding trust in AI technology

"A trusted AI foundation gives companies the confidence to move quickly and scale AI responsibly across every workflow to power the agentic enterprise," says Inzerillo. "When trust is built into the platform, teams can experiment continuously in production, with humans driving judgment and agents delivering scale."

The Salesforce trusted AI foundation delivers three core capabilities that make the Agentforce 360 Platform accurate, explainable, and secure:

- Context and accuracy, ensuring outputs are grounded in unified business data and knowledge.
- Built-in trust, security, and compliance, embedding visibility and control into every workflow.
- An open and unified platform, connecting agents, data, and semantics across ecosystems to avoid lock-in and ensure consistency.

"If we were building this technology ourselves, we would have to assume the burden of the cybersecurity, upkeep, maintenance, and scalability of it," says Josiah Bryan, CTO and lead AI researcher at Precina. "Salesforce invests so beautifully and so heavily in cybersecurity that we can trust Salesforce to take care of our patients' data as well as we take care of our patients."

"The agentic enterprise won't be won by the fastest model or the flashiest demo," Inzerillo says.
"It will be won by the companies that earn trust with their boards, their employees, and their customers, and can turn that trust, through platforms like the Agentforce 360 Platform, into velocity, quality, and measurable business value."

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. For more information, contact sales@venturebeat.com.