CRN’s 2024 Products Of The Year

CRN staff compiled the top partner-friendly products that launched or were significantly enhanced over the past year and then turned to solution providers to choose this year’s winners.

And The Trophies Go To…
The CRN 2024 Products of the Year awards honor the leading partner-friendly IT products as selected by the solution providers that develop solutions and services around these products and bring them to their customers.
CRN editors selected finalists in 30 technology categories from products that were newly launched or updated between September 2023 and September 2024. The categories range from mainstay channel products in enterprise networking, enterprise storage and SD-WAN to products in newer technology areas such as application performance/observability, artificial intelligence infrastructure and AI PCs.
We then asked solution providers to rate the products on three subcategories: technology, revenue and profit, and customer need. The product with the highest overall score (the average of the three subcategory scores) in each category was named the winner.
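To make the selection method concrete, here is a minimal sketch of the scoring arithmetic described above. The product names and ratings are hypothetical; it simply averages the three subcategory scores and picks the top product, as an illustration rather than CRN's actual tabulation.

```python
# Hypothetical illustration of the scoring method: each product gets three
# subcategory ratings (technology, revenue and profit, customer need), and
# the overall score is the average of the three.
products = {
    "Product A": {"technology": 8.4, "revenue_profit": 7.9, "customer_need": 8.8},
    "Product B": {"technology": 9.1, "revenue_profit": 7.2, "customer_need": 8.1},
}

def overall(scores: dict[str, float]) -> float:
    """Average of the three subcategory scores."""
    return sum(scores.values()) / len(scores)

# The product with the highest overall score in the category is the winner.
winner = max(products, key=lambda name: overall(products[name]))
print(f"Winner: {winner} (overall {overall(products[winner]):.2f})")
```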
What follows are the winners, subcategory winners and finalists for 2024.

Application Performance and Observability
Winner Overall: IBM Instana Observability
IBM Instana Observability automatically discovers, maps and monitors all services and infrastructure components, providing complete visibility across an application stack. It continuously captures every trace, detects changes in real time, and provides detailed insights to automate root cause detection and problem resolution. Instana’s approach to observability includes built-in automation, application and infrastructure context, and AI-powered intelligent actions.
IBM Instana Observability scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Dynatrace Unified Observability Platform
Finalist: Datadog Observability Platform
Finalist: Grafana 11
Finalist: New Relic Observability Platform
Finalist: Splunk Observability Cloud

Artificial Intelligence: AI PCs
Winner Overall: Acer TravelMate P4 14
The Acer TravelMate P4 14 is an AI-ready laptop for business professionals, with Microsoft Copilot, TravelMate Sense, and PurifiedVoice with AI Noise Reduction. The product harnesses the power of the Intel Core processor with built-in vPro Essentials hardware security and Intel Unison phone integration capabilities.
The Acer TravelMate P4 14 scored highest overall in this product category and highest for technology and for revenue and profit.
Subcategory Winner – Customer Need: HP EliteBook 1040 G11
Finalist: Apple MacBook Pro
Finalist: Dell Latitude 7455
Finalist: Lenovo ThinkPad 14S Gen 6
Finalist: Samsung Galaxy Book4 Pro

Artificial Intelligence: Infrastructure
Winner Overall: Supermicro AS-4125GS-TNHR2-LCC
The Supermicro AS-4125GS-TNHR2-LCC is designed for large-scale and cloud-scale compute tasks in AI, high-performance computing, and deep learning/ML development. The rackmount, liquid-cooled server runs on Nvidia H100 GPUs.
The Supermicro AS-4125GS-TNHR2-LCC scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Dell PowerEdge R760xa
Finalist: Lenovo ThinkSystem SR780a V3

Artificial Intelligence: Productivity Suites
Winner Overall: Gemini For Google Workspace
Google Gemini is an AI-powered assistant that helps with a variety of tasks including writing, coding, research, data analysis and design. Gemini is integrated into Gmail, Docs and Sheets.
Gemini for Google Workspace scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Microsoft Copilot

Big Data
Winner Overall: HPE Ezmeral Data Fabric Software
Hewlett Packard Enterprise’s HPE Ezmeral Data Fabric Software is a platform for data-driven analytics, machine learning and AI workloads. It serves as a secure data store and provides file storage, NoSQL database, object storage and event stream capabilities. The product reduces data silos with a unified data lakehouse, and centrally manages and governs data while accessing it directly where it resides.
HPE Ezmeral Data Fabric Software scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Cloudera Open Data Lakehouse
Finalist: Databricks Data Intelligence Platform
Finalist: Microsoft Intelligent Data Platform
Finalist: Snowflake Data Cloud
Finalist: Starburst Galaxy

Business Applications
Winner Overall: SAP S/4HANA
S/4HANA is SAP’s flagship ERP (enterprise resource planning) business application suite that provides finance, accounting, procurement, supply chain, production and employee management capabilities. The software uses AI and machine learning to analyze operational data and automate routine business tasks.
SAP S/4HANA scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Epicor ERP
Finalist: Oracle NetSuite
Finalist: Microsoft Dynamics 365
Finalist: Sage Intacct

Business Intelligence and Data Analytics
Winner Overall: MicroStrategy ONE
MicroStrategy ONE is a cloud-based, AI-powered business intelligence system that turns raw data into actionable insights. The software provides an array of role-based analytical capabilities and offers no-code, low-code and pro-code options for infusing analytics into business operations.
MicroStrategy ONE scored highest overall in this product category and highest for revenue and profit. It tied for the highest score for customer need.
Subcategory Winner – Technology: Amazon Redshift
Subcategory Winner – Customer Need: Domo Data Experience Platform (tie)
Finalist: Google Cloud BigQuery
Finalist: Qlik Sense
Finalist: Salesforce Tableau

Data Protection, Management and Resiliency
Winner Overall: Veeam Data Platform
Veeam Data Platform data protection and management software is used by businesses to protect, back up, recover and manage their data across on-premises, hybrid and multi-cloud environments. The system protects data across a range of physical servers, cloud instances, applications and virtual machines.
Veeam Data Platform scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Cohesity Data Cloud
Finalist: Commvault Cloud Powered by Metallic AI
Finalist: Dell PowerProtect Data Manager Appliance
Finalist: HYCU R-Cloud
Finalist: Rubrik Security Cloud

Edge Computing/Internet of Things
Winner Overall: Scale Computing Autonomous Infrastructure Management Engine (AIME)
Scale Computing AIME provides the AI orchestration and management functionality within SC//HyperCore and significantly reduces the effort required to deploy, secure, manage and maintain IT infrastructure – including edge computing and IoT systems.
Scale Computing AIME scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Red Hat Device Edge
Finalist: Eaton iCube
Finalist: HPE Edgeline
Finalist: IBM Edge Application Manager
Finalist: Schneider Electric EcoStruxure Micro Data Center R-Series

Hybrid Cloud Infrastructure
Winner Overall: NetApp Hybrid Cloud
NetApp Hybrid Cloud combines public and private clouds, on-premises data centers and edge locations to run distributed workloads including web and content hosting, application development, data analytics and containerized applications.
NetApp Hybrid Cloud scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: IBM Hybrid Cloud
Finalist: Dell Technologies Apex Hybrid Cloud
Finalist: HPE GreenLake
Finalist: Nutanix Hybrid Multicloud
Finalist: VMware Cloud Foundation

MSP Platforms
Winner Overall: Kaseya 365
Kaseya 365 is a subscription-based service for MSPs that provides the core functions needed to manage, secure, back up and automate endpoint devices. The company recently introduced Kaseya 365 User to protect user data and identities in Microsoft 365 and Google Workspace environments.
Kaseya 365 scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: N-able Cloud Commander
Finalist: Atera
Finalist: ConnectWise Asio Platform
Finalist: HaloPSA
Finalist: Syncro Platform

Networking – Enterprise
Winner Overall: Cisco Networking Cloud
Cisco Networking Cloud is Cisco’s AI-native platform built for the global area network that provides unified management, automation, and operational simplicity and security by converging and connecting fragmented on-premises and cloud networks.
Cisco Networking Cloud scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: HPE Aruba Networking Enterprise Private 5G
Finalist: Juniper AI-Native Networking Platform
Finalist: Nile NaaS
Finalist: Prosimo AI Suite for Multi-Cloud Networking

Networking – Wireless
Winner Overall: HPE Aruba Networking Wi-Fi 7 Access Point
HPE Aruba Networking Wi-Fi 7 access points provide AI-ready, high-performance and secure connectivity for enterprise applications, edge IT and IoT devices. HPE Aruba says the Wi-Fi 7 access points provide up to 30 percent more capacity for wireless traffic than competing products.
The HPE Aruba Networking Wi-Fi 7 access points scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Extreme Networks AP5020 universal Wi-Fi 7 access point
Finalist: Fortinet FortiAP 441K Wi-Fi 7 access point
Finalist: Zyxel Wi-Fi 7 access point

Power Protection and Management
Winner Overall: Eaton 9PX 6kVA Lithium-Ion UPS
Eaton touts the 9PX 6kVA Lithium-Ion UPS as ideal for enterprise IT, edge deployment and light industrial applications. It features remote firmware upgrades and integration with leading hyperconverged infrastructure and virtualization platforms.
The Eaton 9PX 6kVA Lithium-Ion UPS scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Vertiv Liebert GXT5 Lithium-Ion UPS
Finalist: CyberPower PFC Sinewave 1U UPS
Finalist: Schneider Electric Easy UPS 3-Phase 3M Advanced

Processors – CPUs
Winner Overall: Intel Core Ultra Series
Intel describes its Core Ultra processors as its premier processor line for desktop systems and mobile devices for enabling AI experiences such as copilots, productivity assistants, text and image creation, and collaboration.
Intel Core Ultra Series scored highest overall in this product category and highest for customer need.
Subcategory Winner – Technology: Apple M3
Subcategory Winner – Revenue and Profit: AMD Ryzen Pro 8040 Series
Finalist: AmpereOne
Finalist: Qualcomm Snapdragon X Elite

Processors – GPUs
Winner Overall: Nvidia H200
With its higher performance and expanded memory bandwidth and capacity, Nvidia’s H200 Tensor Core GPU is a popular processor for GenAI and high-performance computing workloads. Nvidia says the H200 also offers improved energy efficiency and lower TCO than the earlier H100.
The Nvidia H200 scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: AMD Instinct MI300X
Finalist: Intel Arc A570M

Public Cloud Platforms
Winner Overall: Microsoft Azure
Microsoft Azure is one of the industry’s leading cloud platforms, providing services for building, running and managing cloud applications. The platform’s wide range of services includes compute, storage, analytics and networking capabilities.
Microsoft Azure scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Oracle Cloud Infrastructure (OCI)
Finalist: Amazon Web Services
Finalist: CoreWeave Cloud
Finalist: Google Cloud Platform
Finalist: Snowflake Data Cloud

SD-WAN
Winner Overall: HPE Aruba EdgeConnect SD-WAN
HPE Aruba EdgeConnect SD-WAN is a software-as-a-service wide area network offering that provides secure connectivity and data access for hybrid work environments. Features include secure access service edge (SASE), a single management interface for observing and controlling the WAN and SASE, real-time monitoring and virtual WAN capabilities.
HPE Aruba EdgeConnect SD-WAN scored highest overall in this product category and highest for revenue and profit.
Subcategory Winner – Technology, Customer Need: Extreme Networks ExtremeCloud SD-WAN
Finalist: Cisco Catalyst SD-WAN
Finalist: Fortinet Secure SD-WAN
Finalist: Palo Alto Networks Prisma SD-WAN
Finalist: Zscaler Zero Trust SD-WAN

Security – Cloud and Application Security
Winner Overall: SentinelOne Singularity Cloud Security
SentinelOne Singularity Cloud Security provides cloud security, threat detection and response for servers, virtual machines and containers. Part of SentinelOne’s Singularity platform, Singularity Cloud Security works across private and public clouds, including AWS, Azure and Google Cloud Platform.
SentinelOne Singularity Cloud Security scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Palo Alto Networks Prisma Cloud
Finalist: CrowdStrike Falcon Cloud Security
Finalist: F5 Distributed Cloud Services Web Application Scanning
Finalist: Orca Cloud Security Platform
Finalist: Tenable Cloud Security
Finalist: Wiz Cloud Security Platform

Security – Data
Winner Overall: IBM Guardium Data Protection
IBM Guardium Data Protection protects sensitive data in the cloud and in on-premises systems. The product automatically discovers and classifies sensitive data across an enterprise and provides data activity monitoring and analytics, near real-time threat response workflows, and automated compliance auditing and reporting.
IBM Guardium Data Protection scored highest overall in this product category and highest for revenue and profit.
Subcategory Winner – Technology, Customer Need: Zscaler Data Protection
Finalist: Forcepoint ONE Data Security
Finalist: Proofpoint Information Protection
Finalist: Rubrik Security Cloud
Finalist: Wiz DSPM

Security – Email and Web Security
Winner Overall: Mimecast Advanced Email Security
Mimecast Advanced Email Security is a comprehensive, cloud-based email security system that guards email from a range of cyberattacks including spam, viruses and malware. Capabilities include threat intelligence and protection, data leak prevention and secure messaging.
Mimecast Advanced Email Security scored highest overall in this product category and highest for revenue and profit.
Subcategory Winner – Technology, Customer Need: Cloudflare Application Security
Finalist: Abnormal Security Platform
Finalist: Akamai API Security
Finalist: Barracuda Email Protection
Finalist: Proofpoint Threat Protection

Security – Endpoint Protection
Winner Overall: Sophos Intercept X
Sophos Intercept X takes a comprehensive, prevention-first approach to security that Sophos says blocks threats without relying on any single technique. Intercept X provides endpoint detection and response cybersecurity using a wide range of tactics to stop ransomware, breaches, data loss and other advanced threats from impacting end users.
Sophos Intercept X scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: CrowdStrike Falcon Insight XDR
Finalist: Huntress Managed EDR
Finalist: SentinelOne Singularity XDR
Finalist: ThreatLocker Protect
Finalist: Trend Micro Trend Vision One

Security – Identity and Access Management
Winner Overall: CyberArk Workforce Identity
CyberArk Workforce Identity is a SaaS-delivered system that simplifies identity and access management across enterprise systems. The product provides unified workforce and B2B access and identity management within a single offering.
CyberArk Workforce Identity scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Ping Identity PingOne for Workforce
Finalist: Okta Workforce Identity Cloud
Finalist: Microsoft Entra ID
Finalist: OpenText NetIQ Identity Manager
Finalist: SailPoint Identity Security Cloud

Security – Managed Detection and Response
Winner Overall: Huntress Managed Identity Threat Detection and Response
With Huntress Managed Identity Threat Detection and Response (ITDR), formerly MDR for Microsoft 365, Huntress threat experts monitor and respond in real time to critical security threats such as suspicious login activity, privilege escalation attempts, and email tampering and forwarding.
Huntress Managed Identity Threat Detection and Response scored highest overall in this product category.
Subcategory Winner – Technology: SentinelOne Singularity MDR
Subcategory Winner – Revenue and Profit: Sophos MDR
Subcategory Winner – Customer Need: Arctic Wolf MDR
Finalist: CrowdStrike Falcon Complete Next-Gen MDR
Finalist: ThreatLocker Cyber Hero MDR

Security – Network
Winner Overall: Cisco Hypershield
Cisco Hypershield is a distributed security architecture that protects networks, applications and workloads in data centers and cloud environments. Hypershield features an AI-native rules engine, autonomous segmentation and distributed exploit protection.
Cisco Hypershield scored highest overall in this product category and highest for technology.
Subcategory Winner – Revenue and Profit: Fortinet FortiGate (tie)
Subcategory Winner – Revenue and Profit: SonicWall Cloud Secure Edge (tie)
Subcategory Winner – Customer Need: Fortinet FortiGate
Finalist: Sophos XGS Firewall
Finalist: ThreatLocker Cyber Hero MDR
Finalist: WatchGuard ThreatSync+ NDR

Security – Security Operations Platform
Winner Overall: Arctic Wolf Security Operations
Arctic Wolf Security Operations, renamed the Arctic Wolf Aurora Platform in November 2024, is a cloud-based platform built on an open XDR architecture. It offers a range of services to protect against cyberthreats, including managed detection and response, incident response, threat intelligence, managed risk, managed security awareness and a security operations warranty.
It scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: CrowdStrike Falcon Next-Gen SIEM
Finalist: Google Security Operations
Finalist: Microsoft Sentinel
Finalist: Palo Alto Networks Cortex XSIAM 2.0
Finalist: Splunk Enterprise Security

Security – Secure Access Service Edge
Winner Overall: Palo Alto Networks Prisma SASE
Palo Alto Networks describes Prisma SASE as a complete AI-powered SASE solution that combines network security, SD-WAN and autonomous digital experience management in a single service. It incorporates the Zero Trust Network Access 2.0 architecture.
Palo Alto Networks Prisma SASE scored highest overall in this product category and highest for technology.
Subcategory Winner – Revenue and Profit: Zscaler Zero Trust SASE
Subcategory Winner – Customer Need: Fortinet FortiSASE
Finalist: Cato SASE Cloud Platform
Finalist: Cisco Secure Access
Finalist: Netskope One SASE

Storage – Enterprise
Winner Overall: NetApp AFF C-Series
The NetApp AFF C-Series storage array platform offers all-flash storage for data centers. The product is designed to provide economical, high-density storage for tier 1 and tier 2 data center workloads and to unify storage environments.
NetApp AFF C-Series scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Pure Storage FlashArray//E
Finalist: Dell PowerStore
Finalist: HPE Alletra Storage MP
Finalist: Infinidat SSA Express
Finalist: Quantum ActiveScale Z200 Object Storage

Storage – Software-Defined
Winner Overall: Pure Storage Purity
Pure Storage Purity software unifies, manages and protects data in data centers, the cloud or at the edge. Capabilities include data management (including AI-driven array operations, monitoring, analysis and optimization), data replication, data mobility, data reduction, data encryption, disaster recovery and high availability.
Pure Storage Purity scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: DDN Infinia
Finalist: Dell PowerFlex
Finalist: HPE GreenLake for Block Storage
Finalist: IBM Software-Defined Storage

Unified Communications and Collaboration
Winner Overall: RingCentral RingCX
RingCentral RingCX is an AI-powered, omnichannel contact center system that combines voice, video and more than 20 digital channels in a single platform. Capabilities include intelligent virtual agent integration, analytics and reports, outbound dialing and agent scripting.
RingCentral RingCX scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Intermedia Unite
Finalist: Cisco Webex
Finalist: Microsoft Teams
Finalist: Nextiva Unified Customer Experience Platform

5 Palo Alto Networks Executives Share Partner Opportunity In XSIAM, Prisma

‘As the scale of these transformations has increased, the SIs, the community and the partners have become even more important,’ Palo Alto Networks Chief Product Officer Lee Klarich tells CRN.

Palo Alto Networks CEO Nikesh Arora is setting the tone on the importance of solution providers for the security platform vendor’s go-to-market, with executives across the vendor’s product portfolio looking to grow the partner opportunity.
CRN recently spoke with top Palo Alto Networks executives during a press-and-analyst event at the vendor’s Santa Clara, Calif., headquarters, all of whom echoed Arora’s view that solution providers help achieve positive outcomes with customers and that, to be an effective channel partner today, one needs robust advisory and consulting practices.
System integrator partners, in one example, are “able to help take a large enterprise through a platformization project,” Palo Alto Networks Chief Product Officer Lee Klarich told CRN. “These are bigger projects. They’re more multi-faceted in nature. They have more of a consultative aspect to them. As the scale of these transformations has increased, the SIs, the community and the partners have become even more important in terms of how we help our customers go through this and give them the resources and expertise to supplement their own capabilities on a global basis.”
[RELATED: Palo Alto Networks CEO Arora: ‘The Role Of VARs Is Changing’]

Palo Alto Networks Partners

Palo Alto Networks has about 15,000 channel partners worldwide, according to CRN’s 2024 Channel Chiefs.
In another example of the importance of Palo Alto Networks partners, Amol Mathur, the vendor’s senior vice president and general manager for its cloud security platform Prisma Cloud, told CRN that cloud usage growth and growing adoption of artificial intelligence (AI) have given customers more assets that need securing.
“If I have a big cloud footprint, that’s where all the attackers are now pivoting to essentially with ransomware and so on,” Mathur said. “So in terms of demand, it’s off the charts.”
Here is a look at where five top Palo Alto Networks executives see the business opportunity for the vendor’s partner ecosystem.

The Power Of Partners In Platformization

Lee Klarich
Chief Product Officer
What I would call out is, in particular, the value of the larger, more system integrator scale partners, where they’re able to help take a large enterprise through a platformization project.
These are bigger projects. They’re more multi-faceted in nature. They have more of a consultative aspect to them. As the scale of these transformations has increased, the SIs, the community and the partners have become even more important in terms of how we help our customers go through this and give them the resources and expertise to supplement their own capabilities on a global basis.
There are some partners that can scale worldwide. But in many cases, there are specialists across different parts of EMEA (Europe, the Middle East and Africa). Even regionalization within EMEA. And Asia is sort of the same way. We have spent a lot of energy building out that broader ecosystem of partners to help with their customers. I’ll call that the transformation category.
Then there’s the category of, OK, even once that’s done, do you have all the expertise needed to then run and operate it? And so that’s more of the MSSP category that sits on top of it. We’ve been, at least from my perspective, starting to do a better job of making sure that we’re strategically aligned with our partners in that category.
Take XSIAM (extended security intelligence and automation management) as an example. It enables the SOC (security operations center) to be run very differently. And so it’s important that as we approach our MSSP partners relative to managed SOC services that they’ve strategically aligned with our approach.
It doesn’t really work super well to take a managed SOC service designed for legacy technology, put them on top of XSIAM, and take that to a customer. Customer gets confused. … In a way, we’ve sort of gotten picky as to who best aligns with how our products work and therefore adds the most value when running on top of and/or integrated with our products.
(As for future product iteration, we’re) starting to get more prescriptive around what ‘great’ looks like. What do we believe a really good security architecture looks like? What is a really good security outcome? And then how do we align everything we do toward those outcomes, whether it is what we as Palo Alto Networks do or our partners, managed service provider partners and everyone else, are they aligned with helping our customers achieve those outcomes as well?
Obviously with customization and other things like that. But it’s basically getting more opinionated on what a good outcome is and how to accomplish it, as opposed to saying, ‘Yeah, you can do whatever you want to.’ Technically they can. But in most cases, our customers actually want us to help them on how to achieve the right outcome and how to solve some of these problems that maybe they’ve been stuck on for quite a while.

The AI Security Opportunity

Amol Mathur
SVP, GM, Prisma Cloud
We work with a lot of GSIs. We have a lot of partners who resell, who service a whole suite of products–not just cloud, but our soft products as well and so on.
When you look at cloud security, specifically, a number of stats will tell you the growth of the underlying consumption has been phenomenal. The pandemic hyper-accelerated it. And now, with AI largely being available only in public cloud with GPUs (graphics processing units), even people who are not very cloud-forward are moving faster than they would have, because if they want to experiment and do something it’s always in the cloud.
So we are seeing a tremendous amount of need for security–everything from foundational cloud posture all the way to understanding in depth what data and AI assets are being used, all the way to ‘give me runtime threat prevention, detection and response capabilities.’
Because if I have a big cloud footprint, that’s where all the attackers are now pivoting to essentially with ransomware and so on. So in terms of demand, it’s off the charts. In terms of just the market growth.
The second thing is that, while cloud security (has) been around for five, six years, it’s still a market where a lot of customers don’t have the maturity. And when I say maturity, I mean everything from getting a cloud security program up and running and operational all the way to … monitoring, triaging attacks, helping with remediation at scale, putting architectures where you’re trying to prevent as many things and do secure by design and so on.
So there’s a lot of opportunity for experts to come in and advise customers on how to do this. … That’s a big trend that we are seeing, that there’s just not enough skill set. And a lot of the constructs that people use in cloud, they are very different.
When we think of, in the enterprise world, single-sign-on user identities–in the cloud world, there is a huge portion of non-user machine identities.
And how they get assumed and exploited and abused to gain access and breaches happen–that expertise needed is quite different. That’s where organizations that know how to do this can provide everything from basic deployment services, ongoing fine-tuning, all the way to fully managed cloud SOC, cloud posture, vulnerability management, et cetera.
Data and AI is a big area for us (looking ahead for product innovation). We have released solutions for data security posture management and AI security posture management. And related to AI security posture management, very complementary to that, our peers in the network security team have released AI access security and AI runtime security.
In most cases, even the companies don’t know how the data is being used by the AI model, because it’s kind of a black box.
(Another area is remediation at scale.) You need a strategy using AI to be able to remediate those issues in fewer steps. If you just go one by one, from top to bottom, you’ll never be able to resolve your issues.

The SASE And Prisma Access Opportunity

Anand Oswal
SVP, GM, Network Security
We’re seeing great activity from them (MSSPs) on all things SASE. We’re seeing great activity on GSIs (global system integrators) wanting to sell a unified SASE.
The GSIs, MSPs, MSSPs, all of them (are choosing Palo Alto Networks) because it allows them to differentiate their offerings. Networking is really coming together.
You think about just remote access, for the most part–(when) companies allow any remote access solution, Prisma Access or somebody else … they’re actually outsourcing a network because you’re no longer running a network like you did in the old days.
Some providers will use the cloud like we use–combination of different cloud providers for redundancy. Some will build their own through Equinix and leasing through things at scale.
Then you want to have combined visibility across networking and security and common logic to identify applications on your SD-WAN device and your Prisma Access device and be allowed to then add more services on top–like visibility from user applications. … The value add for the GSIs, MSSPs, MSPs is when you can unify. It also differentiates them.

Transforming The SOC With XSIAM

Scott Simkin
VP, Marketing, Next-Generation Security
(I think about) how am I going to provide an amazing platform, a set of products that we can give to a partner to deploy, operate, manage, and run 24 by seven on behalf of their customers.
The days of a VAR pushing paper, that’s not how you’re going to differentiate yourself. You have to build in value-added services.
When I’m talking to our partners in the big consulting firms, one of the things they always say is, ‘Every single company is asking the question of how do I transform my SOC?’ And that question is predicated on a few different things. The customer is sitting there going, ‘I have a mess of technology. I have far too much data and alerts than I can possibly deal with. And of course, I don’t have enough people. I need a better path.’
And when they look at everything, they need a third-party, trusted partner who says, ‘I can look at your environment. I can understand your challenges. I can understand your needs. And I can help you get from point A to B for transforming your SOC.’
They help them modernize and standardize on XSIAM so not only are they deploying a few different things. How do we stop the most possible–get to as close to 99.9999 percent–of attacks as possible? Then we say, ‘How do we prioritize the small few that get through with AI and analytics?’
And how do we give them full context, full visibility, understanding of what is occurring so they can paint the picture in their own SOC of what occurred–connecting the dots, so to speak–and then take an action. So when you talk to an MSSP or GSI, they’re doing all the hard work of architecting, designing and maybe even through deployment. But our play and a win-win for us–the customer and the GSI–is having the best AI-driven platform in the market.
How do we help build out the right systems, the right processes, and help them build their business on top of XSIAM? Partners out there are making more money, signing bigger deals and helping customers do more significant transformations with XSIAM than a legacy SIEM could possibly deliver.
Nearly every single customer we talk to and they will talk to is using a legacy SIEM. Whether it’s anyone who’s been around for 10, 15, 20 years. So the market is ripe for disruption. And what I want all the partners to think about is, hey, they can work with a best-in-class platform. They can work with a partner that they know is channel-led–unlike some of our competitors, who like to take things, perhaps, direct and not give them an at bat, which hurts their business, ultimately, at the end of the day.
And how do we deliver something that is truly, truly transformative? I was just talking to a customer … what they told me is, we’re in a situation where it is untenable for us in our security team. It’s untenable because the risk of a breach is so high, because of that underpinning of where’s the data coming from, what’s the system it’s going into? And then how do I throw dozens and dozens of people at a tool that ultimately doesn’t get them to an outcome that they want, which is, stop the adversary? Then I showed them XSIAM. … The second you show them a demo, their eyes light up.
Seeing is believing for XSIAM. And any customer who sees it, who has experience with a legacy SIEM, a first generation EDR or the other tools that sit within that SOC, they will absolutely fall in love. And it will be an outcome that they can count on.

The Cortex Opportunity

Elad Koren
VP, Products, Cortex
We’ve seen a lot of traction from partners. (Cortex, Palo Alto Networks’ XDR product) is something that is constantly getting improved, and we are seeing the motion with what they can do with the system, because of its flexibility, the scalability of the system–it’s huge.
We launched earlier this year the MSSP set of capabilities. And this was a result of the fact that a lot of the partners were asking for something like that so that they can actually operate based on that.
And we do see this being taken to the market with great results so far. So very exciting.

Google Cloud Is Changing Its GCP Reseller Discount Model: Exec Colleen Kapase Explains Why

Google Cloud’s global partner program leader, Colleen Kapase, explains Google’s new discount model for partners reselling Google Cloud Platform to enterprises as well as a new “nine-figure” partner fund slated for 2025.

Google Cloud is changing its discount model for partners reselling the Google Cloud Platform in large enterprise deals, which in some cases means removing the option for custom GCP discounting in multimillion-dollar deals.
However, Google Cloud’s top channel executive, Colleen Kapase, isn’t forcing enterprise customers to buy GCP directly from Google’s internal sales force and is still encouraging partners to resell GCP in the SMB and mid-enterprise markets.
“We’re not saying, ‘Hey, you can’t resell here in any customer market.’ That being said, there are some simplifications and streamlining in the high-end of the market, where we’ve seen customers do have a preference for simplification, to work directly with us,” said Kapase, vice president of channels and partner programs for Google. “But partners can resell anywhere they choose from a segment size.”
[Related: 5 Huge Google Cloud Partner Benefits In New AI Agent Program: Kevin Ichhpurani]
Google Cloud’s reseller discount changes go into effect sometime during the second half of 2025, according to Kapase. No changes will be made to Google Workspace discounts.
New Nine-Figure Google Partner Fund Unveiled
The $46 billion cloud giant also revealed to CRN that it will be launching a nine-figure investment fund to drive partner services.
“This fund is going to provide incentives and rebates for qualified services work and support partners to continue helping drive proofs of concept, implementations, training on AI models, etc.,” she said.
“We’re going to be moving to more of a co-sell motion with our services partners,” Kapase said. “This is where our sales team can work along with our partners, regardless of how the transaction is being done, and really keep bringing our partners in earlier in the sales cycle—very much with an intention for them to position their services delivery capabilities.”
Google Cloud has thousands of partners across the globe. The Google Cloud Platform is the third-largest cloud platform on the planet, behind only Amazon Web Services and Microsoft Azure. Mountain View, Calif.-based Google Cloud reported record revenue of $11.4 billion in the third quarter of 2024, an increase of 35 percent year over year.
Kapase takes a deep dive with CRN around Google Cloud’s new GCP reseller discount change, what accounts are deemed enterprise by Google, the company’s rules of engagement with the channel, and details about Google’s new nine-figure channel funding.

For GCP resellers, will Google Cloud partners still be able to resell GCP in the enterprise market?
First and foremost, partners can sell up and down the stack—wherever they want to, wherever customers choose to integrate.
So we’re not saying, ‘Hey, you can’t sell here in any customer market.’ That being said, there are some simplifications and streamlining in the high-end of the market, where we’ve seen customers do have a preference for simplification, to work directly with us. But partners can resell anywhere they choose from a segment size.
We are doubling down on our focus in the mid market, in the SMB, in public sector, in many countries and markets where reselling [GCP] typically is a way of doing business together. Workspace and security continue to move in that way.
So we have made some changes. I’m not going to get into the specifics of what discount changes have been made in that large-deal space, but we have made some ways to simplify it. Customers have choice and partners have choice in any market that they want to engage in.
When does this change go into effect?
The change is happening between the middle of next year and the end of the following year.
So what we’re trying to do is give really long lead time for our partners, who maybe have been doing traditionally more reselling, to look at their business models and invest with us in the services opportunity. Again, a long lead time of when that change is going to come so folks can transition.

When you say it affects only ‘large deals,’ what is Google’s definition of a large enterprise deal?
It’s really in the large end deals where you’d expect to see our sales team.
So we’re thinking deals of, you know, $5 million or more.
So once you’re getting into those strategic size deals, that’s where we’re making some of those [discount] simplifications.
With the new reseller discount structure, is there going to be one flat custom discount for partners to resell GCP?
We are just flattening and simplifying the way we do custom discounts at the top end of the market, in the largest deal sizes.
Really it’s just a simplification and a flattening of things at the higher end of those larger deals, when things can get a little bit more complicated from a deal structure perspective. Our sales team is probably even more needed in those engagements.
What I love about Google is just the amount of investment that we make, and the amount of free technical enablement that we provide our partners, to help them build up their bench of strength in the services side of the business. … Frankly, where we see the profit and where the monetization is moving with AI, it’s moving to the services side of the business.

Does this GCP discount change affect partners globally or in specific regions?
It is mostly global. There are some exceptions.
So LATAM [Latin America] is one market that is not going to have some of these discounting changes. Middle East and Africa as well are excluded. Also in a few countries in Asia like Taiwan, Korea and Japan, and that’s probably it.
Other than that, in the rest of the world, we are making these simplifications and changes.
What type of partner does this affect the most? Google’s global system integrators (GSIs)?
No, not really at all.
The reason is that most of the time, where global SIs and even the regional SIs are focused on those system integration services, they’re doing direct deals with the customer on their services engagement.
The resale opportunity is not really what they’re engaged in, if you will. So we see this having little to no effect on them.
But for organizations that are smaller but maybe focus more on resale, we do see the opportunity of moving to that managed service model, where you’re offering AI-as-a-Service, as an interesting route to go. Where, if you still want that direct customer relationship, you can have that, but you’re literally offering a model as a service to customers.

What about GCP renewals? Let’s say a partner doesn’t have a GCP renewal until 2026. What’s going to happen with renewals after you implement this reseller discount change next year?
Some customers are going to choose to just do business as they have. They’re happy they got what they need, and they’re just continuing to grow.
Other customers are coming to us and the relationship is changing significantly.
With the strength of Google Cloud, our data products, our offerings, our models, etc., there are relationships at the renewal point that are changing. There are customers coming and doubling down on us. The investments are getting to pretty strategic sizes.
There are some cases where some of those customers are going to want to evolve how they’re working with us and work with us in a direct model, and we’ll support that as well.
But again, if they’re doubling down on us, especially with the data strategy, they’re going to need a services partner engaging with them to help them on that implementation.
So it doesn’t mean that a partner is not involved. It just means that the way that we’re working together is going to change slightly.
So Google will still abide by a partner’s renewal contract once this goes into effect?
Correct. Customer choice is first and foremost.
If that deal is going to really expand strategically and change, then the simplifying discounts will allow that customer to come and work with us too.
There are, frankly, a lot of customers that are really tripling down on their investment in Google Cloud. It’s a pretty exciting time to be a partner at Google Cloud right now.

What is Google Cloud changing in its discount structure for partners reselling GCP?
We’re simplifying our discounting structure and we’re bringing in more predictable pricing. So we’re taking our discounting model, especially for resellers, and really working to standardize it and simplify it.
We are really focused on partner-delivered services. You’re going to hear us talking more and more about streamlining from a co-sell motion. This is where partners can continue to work with us, and customers can continue to work with us, and we can work together in this co-sell.
Some may say, ‘Hey, does that mean Google is moving away from resale?’
Reselling is still really important to us. One of the areas where we’re doubling down is our Google Cloud Marketplace, and that’s continuing to be a destination for buyers and sellers—both our own first-party products and third-party products. We’re continuing to look at different ways that we can get partners more and more engaged with that motion.
We’re going to be moving to more of a co-sell motion with our services partners. This is where our sales team can work along with our partners, regardless of how the transaction is being done, and really keep bringing our partners in earlier in the sales cycle—very much with an intention for them to position their services delivery capabilities. We want to get to delighted customers as fast as we possibly can.

Are Google Cloud’s internal sales force rules of engagement with partners changing for GCP resellers? Can you explain what those rules of engagement will look like in the second half of 2025?
Customers always have choice and they can engage with partners in any way. When we’re looking at co-selling, we’re actually creating new programs and new systems to have partners be able to tell us, ‘Hey, I’ve got this opportunity. I’d like to work with your sales team.’ Because we still need to be in this together with our partners.
We’ll have 100 percent of our opportunities having a partner participate.
What’s just going to change is who’s doing what. We’re very much looking at partners who have invested in technical resources and have pre-production solutions on AI, and engaging with them earlier in the opportunity.
Many of these partners bring decades of industry specific data, machine learning, data science experience. We’re bringing them into these deals earlier to help work with us on these opportunities, to identify these opportunities and to help close these opportunities. We’re shifting a ton of our funds into helping the implementation of these services too.
So our engagement across our sales teams and our partners actually is going to strengthen and deepen, because you have to be more engaged in an opportunity together to understand—not just how am I selling this opportunity—but how am I selling, delivering and delighting a customer.
So there aren’t rules of, ‘Work less with partners.’ If anything, we’re focusing on systems and programs for how we work earlier with partners, earlier in sales opportunities, and engage with the right partners who have the right expertise.
So to wrap it up, enterprises can still buy GCP from a Google partner reseller?
Correct. We’re not restricting where they can buy. We are just changing and simplifying on large deals.
So there may be more options for customers, ‘Do you want to work direct with us?’ But in no way are we saying, ‘You can’t resell here.’

Talk about Google’s new nine-figure fund over the next three years. What should a partner be doing right now if they want to capture some of that funding?
We continue to see this new business model evolve with the birth of AI.
Managed service providers have been around for a while, but now we’re continuing to see more often, AI-as-a-Service, SecOps and Security Ops-as-a-Service.
So for partners who truly meet that bar of end-to-end services offerings for customers, that’s something we’re going to keep investing in, putting our resources toward developing those practices and helping customers expand those practices.
Because there are some customers that are just saying, ‘Hey, this is getting complicated. This is difficult. I want to see the advantages of SecOps. I want to see the advantages of AI. I don’t necessarily have the skill set or the resources within my own company to manage that. So I’d like to outsource that to a third party.’
So that’s why we see that as such a huge investment opportunity for us in terms of partners that are offering that to customers. There are a lot of customers asking for that level of help.
When does that nine-figure-fund kick in for partners?
The beginning of the fiscal new year in January.

66Degrees CEO On How Google AI Sales Grew 325 Percent

Top Google partner 66degrees takes a deep dive with CRN around how the solution provider’s AI sales spiked 325 percent in 2024. ‘If you and I are in a brainstorming session, we’re going to come up with 10 good ideas. But if you can have a 5,000-person company come up with ideas, your use cases are just going to flow out,’ says CEO Ben Kessler.

One of the nation’s top Google AI consulting and services partners, 66degrees, is witnessing a massive 325 percent AI sales spike this year as it pivots customers from AI proofs of concept into production.
“Market trend No. 1 is we’ve moved from POC and trying the art of the possible into picking a few lanes in revenue generation or cost reduction where customers are applying AI to test that,” said Ben Kessler, CEO of Chicago-based 66degrees.
Kessler said Google’s market position and strategy for enterprise search and agentic AI, or AI agents, “have a ton of legs.”
“Those are two areas where people can look and touch and feel AI—who are not familiar with AI—but can think of the AI use case in their business,” said Kessler. “If you can get people to adopt AI in your business, the use cases will come and the use cases will spread.”
[Related: Google Cloud Launches AI Agent Partner Program To Drive GenAI Sales, Customer Growth]
“For our clients that are very advanced from an AI standpoint, they’ve put AI at the fingertips of their employees. They said, ‘We’re going to give you these tools. We’re going to give you Gemini for Workspace or Copilot. We’re going to give you a variety of different things so that you can start experimenting,’” said the 66degrees CEO.
“Because if you and I are in a brainstorming session, we’re going to come up with 10 good ideas. But if you can have a 5,000-person company come up with ideas, your use cases are just going to flow out,” he said. “So the really forward-leaning CIOs have said, ‘Let’s get AI into as many people’s hands as we can and familiarize and train with it. The adoption will go up, but also the ideas in our organization will go up.’”
66degrees And Google Cloud
66degrees is a leading AI consulting and data services solution provider and one of Google Cloud’s top AI channel partners, specializing in developing AI-focused, data-led solutions alongside Google technology.
In fact, the company won Google Cloud’s 2024 Expansion Partner of the Year Award for North America.
In an interview with CRN, Kessler takes a deep dive into why AI sales are skyrocketing in 2024, Google BigQuery momentum in migrating customers’ data estate to the cloud, Google’s AI agent charge and 66degrees’ winning AI strategy.

How much are AI sales growing for 66degrees?
From the end of 2022 to the end of 2023, our Google AI growth was around 600 percent. We couldn’t repeat that again. But if you look at the end of 2023 to the end of 2024, our AI practice, our AI revenue, is growing between 320 [percent] and 325 percent. This is measured by billable revenue for our AI/ML consultants.
It’s an exciting trend to see in the market as more companies move from POC to production use cases for their enterprise AI.
How did Google AI sales grow more than 3X in 2024?
We’ve got about 50 data scientists at 66degrees. It’s the fastest-growing area.
Our strategy breaks down into the business centers of where customers can benefit from AI. There are two strands. Strand No. 1 is: How can you have AI and GenAI, from an agent standpoint, interact with helping drive revenue in your business?
I’ll give you a couple of case studies. From prospecting to helping identify high-probability prospects, all the way to helping keep your current and active customers engaged and buying more, so everything in the revenue center is what we’re thinking about in one pillar.
Then the second pillar is how you can actually optimize your operations. That would come down to a variety of different things: Some are agents, some are just more traditional data science models.
That could be, ‘How do you get better at forecasting? How do you get better at pricing? How do you actually build out your supply chain and logistics arms by automating with data science models? How do you expand your employee productivity?’ And that is where AI agents would actually come into play. So, ‘How do you do this across your finance arm? Your HR arm?’ It’s relevant to every single business.
Google’s strategy plays to each of these things. Because what does a CEO or CIO care about? If you can propose a solution that increases an end client’s revenue or reduces cost or improves productivity—you’re going to catch a board’s ear as well as a CIO’s ear.

What are the biggest market trends driving sales for 66degrees? What’s working for you?
Our sales trends are threefold. Enterprises, from an AI standpoint, moved last year from the POC into actually going to production—especially if there can be a business value attached to it.
We’re having a lot of our clients doing an ROI test on their AI. Market trend No. 1 is, we’ve moved from POC and trying the art of the possible into picking a few lanes in revenue generation or cost reduction where customers are applying AI to test that.
No. 2 is on the data side because you need good data to have good AI. I’m a consultant by background and started off my career in business consulting and finance. You can’t have bad data.
Where 66degrees has had lots of tailwind at this point is deploying ROI-driving AI. But our stumbling block is if a company doesn’t have the data ready, if their data isn’t modernized and their data isn’t there—the model will only go so far.
Google’s and other models right now are quite good. It comes down to the AI pull and then the data pull: companies actually having their data in a good place, but then being able to have plenty of data to basically train their final output.
No. 3: We’ve seen some resurgence from just some cloud migration and actually doing some lift and shift and some application modernization. Some of that pull has been AI and data, but I also think that the interest rate environment and spending has loosened up a little bit.
People are saying, ‘Hey, if there can be an ROI on this, moving this or modernizing this application, I’m absolutely going to do that.’

In terms of getting customers’ data estate in order so AI can do its magic, what’s working for 66degrees on the Google side?
So the data products that Google is offering, first and foremost with BigQuery, are very good products. Best-of-breed products.
I look at AI as a three-legged stool. Is your data platform in order? Is your data estate in order? That can be both from a data warehouse and a data lake standpoint. We’ve been very satisfied with the BigQuery product, especially how it integrates and centralizes people’s data, and how it integrates with other Google products or other large language models as needed.
So No. 1, BigQuery is very good.
No. 2 is the large language model.
Then No. 3 is, from a RAG [retrieval-augmented generation] standpoint, the final mile. That’s where a partner comes in and not only moves and modernizes your data but sets up a RAG framework so you can train the last-mile model on your data. And you can have your application or your analytics tuned.
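As a rough illustration of that last-mile step, here is a minimal, self-contained Python sketch of the RAG pattern Kessler describes: rank documents against a question and assemble a grounded prompt for the model. The corpus is invented, the bag-of-words embed() stands in for a real embedding model, and a production setup would retrieve from data modernized into BigQuery and send the prompt to a model such as Gemini; none of this is 66degrees’ actual framework.

```python
import math
from collections import Counter

# Toy corpus standing in for a customer's modernized data estate (hypothetical).
documents = [
    "Guests who booked the spa package last year preferred morning time slots.",
    "Loyalty tier upgrades are granted at 20 nights per calendar year.",
    "Refunds for cancelled excursions are processed within five business days.",
]

def embed(text: str) -> Counter:
    # Trivial bag-of-words vector; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank the corpus by similarity to the question and keep the top k passages.
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    # The RAG step: constrain the model's answer to the retrieved passages.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How fast are excursion refunds processed?"))
```

The shape is the point: retrieve the customer’s own data first, then have the model answer inside that context, which is what lets an out-of-the-box LLM become useful on a specific client’s use case.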
The other way Google is helping from a product standpoint is they’re helping fund us from a migration standpoint. There’s funding, especially if there is spend downstream from that. Google’s continued to be a good partner in that. Some of their targeted campaigns that they’re going to market with partners have been very successful.

Is there one particular campaign from Google that’s helping you?
Customer Experience Reimagined is one we’re focused on right now. There’s a dozen use cases.
What Customer Experience Reimagined means is using and leveraging the CCAI [Contact Center AI] platform to develop and put agents into the market.
We’ve built AI agents for large theme parks that are sitting on Google, which is enhancing the guest experience. For travel and tourism, we’ve built the customer experience that before you get to the resort or hotel, you’re engaging with a Google AI agent that is extending your experience.
So we’re able to push our solutions into the market with the back end of an agent that’s built initially with Google technology but uses [66degrees] as the final mile for us to help execute and train the Google model on that client’s use case. It’s actually quite effective.
The three-legged stool is BigQuery; Gemini or another Google model as the large language model; and then the final mile, which is really important for us. Because oftentimes, when you pull a large language model out of the box, it’s not super helpful. You have to train the model on your data and your intended use case.

One of Google’s biggest pushes in AI right now is AI agents. You said customers are buying into agentic AI or AI agents because it either drives revenue or saves them money. First, can you talk about how AI agents are creating new revenue for customers?
We talked about the agents in these various travel and tourism use cases. We’ve also done this in financial services, manufacturing and in the energy industries, where agents are helping customer service representatives or helping the customer do one of a variety of things.
In manufacturing, logistics and energy, the agent is helping the customer service rep remediate an issue. For example, the power goes out or they can’t find a replacement part. It’s helping the customer service agent be faster and quicker and more responsive.
On the customer-facing side, this is true in financial services. It’s the agent interacting directly with the customer. So the customer is having a conversation with the AI agent.
If you can actually fine-tune the agent to understand—and we’re working with even large telecommunications customers on this—if you can train that agent on the customer’s data, it can actually have a more fine-tuned and better customer experience than if you or I were actually serving that customer.
If you think about resorts, or if you’re on a cellphone and you’re interacting with this agent, it actually is much more fine-tuned in the tone and the data.
We can draw on your data about what you did last vacation and about what experiences you liked. This is true in telecom companies. This is true in banks. This is true on large ships. This is all about using Google technologies to help improve the revenue for each of these customers. This is why we’re growing at 325 percent this year.

Where are customers buying AI agents in terms of cost savings?
On the cost side, I’ll give a couple of different examples.
A finance and procurement office will always need help around forecasting business demand. If you think about the job of a financial analyst, which was my job when I first started off, it’s processing data. It’s processing information and lots of structured and unstructured data.
We have helped a number of different clients actually build data science models to process and make sense of that data. It’s not replacing the analyst. It’s just allowing the analyst to [have] better insight. And that could be on pricing, and that could be on demand forecasting, etc.
In many ways, the agent in this case—based upon that data science model—is making recommendations to the human agent about how the outlook will come across. Imagine if you were able to develop a forecast over the next six or 12 months that wasn’t just based upon your financial analyst but was able to take a data science model and completely process it.
It’s actually accelerating and improving the accuracy of that demand. Or it’s improving the accuracy of a raw material input into this where you can say, ‘I can be more fine-tuned.’ Essentially, what [AI agents] do is, it’s not taking the human decision out of this—it’s improving human decisions with the correct data.
So if you think about what data science is doing and what agents on the face of this are doing, it’s improving the life of the financial analyst so that they can make better, more well-informed decisions.
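As a toy illustration of the kind of model being described here (not any specific 66degrees deliverable), the sketch below forecasts next month’s demand with single exponential smoothing; the sales figures and smoothing factor are invented for the example.

```python
# Toy demand forecast: single exponential smoothing over monthly unit sales.
# Invented numbers; real models blend far more structured/unstructured data.

def smooth_forecast(history: list[float], alpha: float = 0.4) -> float:
    """Return the next-period forecast; alpha weights recent observations."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

monthly_units = [120, 135, 128, 150, 162, 158]
print(f"Next month's forecast: {smooth_forecast(monthly_units):.1f} units")
```

The analyst still decides; the model just surfaces a defensible baseline, which is the point made above.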

What’s another example around AI cost savings?
We have a number of clients asking us to put an enterprise search capability on their HR and their finance and their information [departments].
This is a little bit more enterprise AI than it is agentic AI. But what AI is allowing them to do is smart enterprise search, to completely search through and get to information faster. We’re seeing some clients actually using agents to make suggestions, so there can be a conversation with this enterprise search.
So those are the couple of most popular use cases that we’re seeing: ‘How do you make the financial analyst’s job easier across a number of different metrics? And then No. 2 is, how do you actually remove some of the interface of the financial analyst or the HR person or the legal person, so that you can actually do some disintermediating and get to the information faster?’
At the end of the day, what you’re doing is you’re democratizing data.

Cyera CEO On Raising $300M To Become The ‘End-To-End’ Data Security Platform

As the data security startup announced more funding and a $3 billion valuation, CEO Yotam Segev tells CRN that Cyera plans to continue consolidating more data and AI security tools onto its platform, ultimately providing customers with a ‘unified view of risk.’

Cyera plans to deploy its new $300 million round — the data security startup’s second fundraise of that amount this year — to continue expanding its data and AI security capabilities with the aim of becoming the most comprehensive platform in a red-hot market, Cyera Co-Founder and CEO Yotam Segev told CRN.
The three-year-old company announced the Series D round of funding Wednesday, which brings the startup’s valuation to $3 billion. That’s up from its $1.4 billion valuation achieved in April, when New York-based Cyera raised its prior $300 million round.
[Related: 10 Cloud, Data And Identity Security Startups To Watch In 2024]
In an interview with CRN, Segev said that the company’s data security posture management (DSPM) offering has seen massive demand from customers and partners amid the widespread enterprise push to adopt generative AI.
Cyera’s tool specializes in rapidly providing visibility into the status of an organization’s data and identity access — something that has seen surging interest as a means to enable usage of GenAI applications such as Microsoft 365 Copilot.
This need for securing data against exposure in a GenAI world has proven highly complex, however, in part because data is held in so many different places and the access to that data is often misconfigured.
Cyera’s DSPM technology aims to simplify matters with an agentless approach and through its ability to work across cloud environments, SaaS, data lakes and on-premises environments. The result is a “unified view of risk,” which enterprises have always wanted but have never been able to achieve, Segev said.
Cyera has not been content to stick with its core area of DSPM, though, and has recently expanded into its second major category with a move into data loss prevention (DLP). In October, the company acquired Trail Security for $162 million, which Segev said has brought a unique AI-powered approach to DLP onto the Cyera platform.
When combined with Cyera’s DSPM capabilities, customers now “can actually go and build a data security program” that is truly effective, he said.
“Suddenly, you’re able to actually make DLP work — because you know what you’re trying to protect, and because you know what the crown jewels are and where they reside,” Segev said.
With the help of the new funding, Cyera plans to continue enhancing its DSPM tool as well as adding further capabilities within the DLP sphere, he said.
From there, the company expects to continue introducing new functionality — including in areas such as privacy as well as in governance, risk and compliance (GRC) — so that it can cover as many data security needs as possible for customers, according to Segev.
The ultimate aim is “to consolidate the space and bring simplicity to a space that’s very siloed and complex,” he said.
And Cyera is planning to keep moving fast — to the point that “a year from now, I’d like to see enterprises that are running their data security program on Cyera, from DSPM to DLP to AI security to governance, risk and compliance, to privacy operations, to identity data access governance,” Segev said. “I want to see enterprises, big enterprises, that are running their program end-to-end on Cyera.”
During the interview with CRN, Segev also discussed Cyera’s channel strategy for 2025. Current partners of Cyera include GuidePoint Security, World Wide Technology and Trace3.
The new funding was led by Accel and Sapphire Ventures, while other participating investors in the round were Sequoia, Redpoint, Georgian and Coatue. Cyera has now raised a total of $760 million since it was launched in 2021.
What follows is an edited portion of CRN’s interview with Segev.
What has 2024 been about for Cyera?
We’ve had a very, very good year. The promise of AI changing, transforming and upgrading the enterprise has been a huge catalyst for us. Because when you think about AI, it fundamentally runs on two things — it runs on GPUs and it runs on data. And I think the lack of visibility, the lack of control, the lack of management of data in the enterprise, has really been exposed through this AI transformation.
To give an example that everybody can relate to — you work in a big enterprise, maybe you have access to data you’re not supposed to have access to in SharePoint or Office 365. That’s been a long-standing problem, but how would you ever get to it? How would you find it? And suddenly with [Microsoft] Copilot, you can query Copilot, “Who’s got HR violations in that company, and exactly what are they?” And if you have access to that information, you’re going to get an answer in five seconds with the exact details you wanted to find out. So the game has changed on data security.
What is your biggest differentiator from competitors when it comes to providing visibility into data?
Technologically, there’s two things that we’ve done that are unique and powerful in the market. The first one is the AI-powered classification. So essentially, it’s being able to learn the customers’ unique data types and contextualize those data types. Because it doesn’t end with, “what is the data?” There’s another set of questions you want to ask that, if the AI can answer them for you, you’re in a much better place and you can take much more action. Whose data is it? Is it synthetic data or is it real data? Does it belong to a European citizen or a German citizen or a Canadian citizen? And the AI can do much of that for us.
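To give a rough feel for “classification plus context” in code (a rule-based toy, emphatically not Cyera’s AI model), the sketch below labels a value’s data type and attaches contextual attributes such as the source system and a likely jurisdiction.

```python
import re

# Toy data classifier: label the data type AND attach context attributes.
# A rule-based stand-in for the AI-powered classification described above.

RULES = {
    "email":  re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":   re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(value: str, source: str) -> dict:
    """Return the data type plus context: owner system, inferred jurisdiction."""
    dtype = next((t for t, rx in RULES.items() if rx.search(value)), "unknown")
    jurisdiction = {"iban": "EU", "us_ssn": "US"}.get(dtype, "n/a")
    return {"type": dtype, "source": source, "jurisdiction": jurisdiction}

print(classify("DE44500105175407324931", source="hr-postgres"))
# -> {'type': 'iban', 'source': 'hr-postgres', 'jurisdiction': 'EU'}
```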
The second is the agentless, cloud-native connectors, which allow us to connect once to the underlying cloud provider — and from that single point of integration, unlock all of the different databases, buckets and warehouses that you have inside. Whereas in the past, you had to connect one by one over the network. That was extremely complex. A network-based approach that’s not agentless and not cloud-native just isn’t able to access the data in the different accounts. You have to be able to connect over the API into the accounts. Otherwise, it’s just not going to work. So this is a huge differentiator in the market.
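The “connect once over the API” idea is easy to see in code. This minimal sketch uses AWS’s boto3 SDK: one credentialed API integration enumerates every S3 bucket in an account, with no per-system network agent. It illustrates the agentless pattern generically rather than Cyera’s implementation, and it assumes AWS credentials are already configured in the environment.

```python
import boto3

# Agentless inventory sketch: a single cloud API integration enumerates
# all S3 buckets in the account -- no per-database network connector.

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # get_bucket_location returns None for the default us-east-1 region.
    region = s3.get_bucket_location(Bucket=name).get("LocationConstraint") or "us-east-1"
    print(f"{name}: region={region}")
```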
So when an organization is thinking about all the things they need to do to keep their cloud secure, Cyera would be a piece of that?
I think that’s right — especially when you tie it into identity, like we have. [Before] the move to the cloud, from a consumption model perspective, [organizations] used to own all the layers of the stack. Then we gave the infrastructure away, and we gave the network away. We gave away more and more layers of the stack. And at the edge of it, we ended up with SaaS. And what do we control in SaaS? What data we put in it, and what access we grant to that data. And those layers are consistent across the entire shared responsibility model with the cloud providers — data and access to data. And that’s where Cyera is adding a ton of value to customers. We have customers today that have us connected to over 10 different environments — so their AWS, GCP, Azure, Snowflake, MongoDB, Databricks and their Office 365 and some Google Drive they got through acquisition. And some on-prem databases and fileshares that are left from the olden days. And Box and Salesforce. And so if you get a unified view of risk, and you’re able to build workflows across all of these systems, the impact of that is outstanding.
I remember when we started, I spoke with the head of security operations for Wells Fargo. He told me, “Yotam, you know what I hate about these SaaS security vendors? They show me the risk in Salesforce, and they show me the risk in Office 365, and they show me the risk in Atlassian. What do they think — that I have a security person for Salesforce and I have a security person for Office 365? Just show me the top 10 risks across all of my environment. And show me how to solve them.” And that’s exactly what Cyera is doing. When you look at cloud security, of course, you have the infrastructure and vulnerability aspect. But then you have the data and access aspect — what data lives where and who has access to it. And that’s such a big part of protecting these new ecosystems.
How is the Trail acquisition and your introduction of DLP expanding the opportunity for Cyera?
When I look at Cyera’s purpose in the industry, we want to be the one-stop shop for data security. We want to be the household name for data security. DSPM is such an amazing core to this. Because DSPM provides you with what we never had before — which is a full, comprehensive, automatic inventory of all the data, and specifically all the sensitive data, across the enterprise. With that, you can actually go and build a data security program. And suddenly, you’re able to actually make DLP work — because you know what you’re trying to protect, and because you know what the crown jewels are and where they reside. And you can build policies to either keep them there or to prevent them from going places where they’re not supposed to go, or to monitor how they’re moving in the environment. That was the biggest challenge with DLP. We were always able to do DLP for the most simplistic data types, and even that with a pretty high false positive rate. But what about the complicated data types? How do we deal with them? Whether it’s documents of different sorts, intellectual property, PII. How can we really build policies around those types of data — where they can reside, where they can go, who’s allowed to access them? And that’s where the Trail acquisition has really enabled us to go from DSPM to what we’re calling a data security platform — and not just show you where the data is and help you remediate it at rest, but also track it in motion, detect any violations in motion, and be able to put the preventative controls in place to make sure that you’re not subject to these types of incidents.
Is that opening some new doors for you?
Absolutely. Last week we had our first user conference, and the biggest piece of feedback that every practitioner was telling us is, “I can’t wait to go back to my CFO and tell him that it’s not just DSPM. It’s DSPM plus DLP.” It’s taking care of both of these problems in one unified platform.
This is just the beginning. This product portfolio is going to continue to grow. In conversations [with customers], the one thing you always hear is how many different use cases they need to deal with. And today, they have to stitch together 30-plus products in order to really build the program. That’s impossible to do. That has to be simplified, it has to be unified. And that’s where we’re leveraging the venture capital investment in order to consolidate the space and bring simplicity to a space that’s very siloed and complex.
What are some of the other goals for the funding?
This funding is allowing us to really invest in the product — not just in DSPM, which is our core and we’re doubling down on, but also to go into DLP through this acquisition, and to go into the identity space and answer the questions around data access governance. [We’ll] continue to make moves into adjacent spaces where the customers are asking us to consolidate these use cases and these requirements into one platform. And to have the resources to actually do that is amazing.
Are there other categories or tools that you’re interested in consolidating on the platform?
First of all, I think that in the DLP space, there’s a lot of depth. It’s not just one product or one solution. The Trail team is going to grow within Cyera to build multiple product lines in the DLP space. At the same time, we’re seeing a lot of requests from our customers to answer many of the other core data use cases — whether that’s the privacy requirements around their data, whether it’s GRC and compliance requirements around data, evidencing to auditors, compliance checks, compliance health. [These are] a lot of things that today are not part of what Cyera is putting out in the market, but that will quickly become [the case].
And of course, the biggest topic today is AI security. How do we protect an LLM that’s been built in the enterprise, and we want to put real data into it? And how do we control what happens to that data that the LLM spews out?
What is your channel strategy for 2025?
We’re taking a focused approach with our key partners. We want to make sure that we’re fully enabling their teams and getting into the market together with a high-touch approach. That’s where we are for next year in this regard. And I think that as we continue to mature that program and mature those relationships, we’ll be able to open up to more partners and really widen our scope of interaction to the wider channel community.
The channel, the value-added resellers, are amazing at supporting these implementations, adding value on top of these implementations — and really making sure that the customers are not just getting a product, but getting a solution to the problem they set out to solve. And we’re very fortunate to be working with [our current] partners.
Overall what do you see as the big theme for Cyera in 2025?
I think for us, DSPM is obviously very, very hot right now, and every enterprise is looking at a DSPM initiative. And that is the core of our business. That is where we’re spending the majority of our time and that’s where the majority of the development is.
Around that, the exciting avenues for us are obviously growing into the DLP space and challenging some of the incumbent vendors in that space with their offerings, and bringing to the table DLP that is AI-powered. And that is game-changing, and that’s exactly what Trail is about. We also have the ability to provide agentless DLP, which has a completely different time to value than the traditional DLP that the market is used to. Everybody wants DLP to be easier, and this is exactly that. This is the magic sauce that turns DLP into a very successful program quickly.
Around that, we have our identity focus and we have our AI security focus. Being able to answer the question — “who can access what data in the enterprise?” — it’s mind blowing. Practitioners have never seen a platform that is able to actually answer that question for them.
And on the AI security front, I think that every CISO is being challenged to support this huge transformation that’s happening. And the ability to really partner with the business, enable the business to move fast with AI, to run ahead — but to do it in a way that’s governed, secure, compliant — that’s also extremely meaningful to our customer base. And we’re very proud to be at the forefront of this challenge.
Looking out a year from now, what is your hope for Cyera and what things will be like at that point?
A year from now, I’d like to see enterprises that are running their data security program on Cyera from DSPM to DLP to AI security to governance, risk and compliance, to privacy operations, to identity data access governance. I want to see enterprises, big enterprises, that are running their program end-to-end on Cyera. That would be very, very exciting for me.

The Biggest News In AI, Copilots, Agents

Microsoft Copilot Actions, AI agents inside SharePoint and a new Azure AI Foundry experience are among the big reveals.

Microsoft Copilot Actions prompt templates. Artificial intelligence agents inside SharePoint. And a new Azure AI Foundry experience for designing and managing AI apps and agents.
These are some of the biggest new products and updates the Redmond, Wash.-based tech giant is revealing this week during its Ignite 2024 event.
Ignite runs through Friday, with programming in person in Chicago and online. Microsoft had 200,000-plus people register for the event and expected 14,000-plus in-person attendees.
[RELATED: Microsoft CEO: AI Provides ‘On-Ramp’ To Azure Data Services, Copilot Continues To Surge]

Microsoft Ignite 2024

In total, Microsoft revealed 80 new products and features across its product portfolio, a number of those focused on the emerging AI era.
About 70 percent of the Fortune 500 use Microsoft’s Copilot AI tool, according to the vendor. For every $1 invested, companies see a return of $3.70, with some of the highest returns reaching $10.
Microsoft also said that about 600,000 organizations have used Copilot in Power Platform and other AI-powered capabilities, up fourfold year over year.
Accenture, No. 1 on CRN’s 2024 Solution Provider 500, is in the process of rolling out Microsoft copilots and agents to 100,000 employees, according to Microsoft. It has a commitment to deploy 200,000 more.
AI looks to feature prominently for the vendor’s 400,000-member partner ecosystem in 2025. In Microsoft’s latest quarterly earnings call, Chairman and CEO Satya Nadella said that the company’s AI businesses should “surpass an annual revenue run rate of $10 billion next quarter, which will make it the fastest business in our history to reach this milestone.”
“When I talk about Copilot, Copilot Studio, agents, it’s really as much about a new way to work,” Nadella said on the call. “I describe it as what happened throughout the ’90s with PC penetration. After all, if you take a business process like forecasting, what was it like pre-email and Excel and post-email and Excel. That’s the type of change that you see with Copilot.”
Here are the biggest news items coming out of Ignite 2024 in AI and with Microsoft Copilot.

Copilot Actions, Copilot In Teams

Microsoft moved its Copilot Actions customizable prompt templates into private preview, the vendor announced during Ignite.
Users can leverage Actions to receive status updates and agenda items from colleagues and employees, compile weekly reports, schedule daily emails summarizing other emails and Microsoft Teams chats and more.
Actions users can automate templates on demand or with an event trigger. Actions can deliver information in an email, Word document and other specified formats, according to the vendor.
Microsoft will push new Copilot in Teams abilities into preview in early 2025, including a way for users to analyze screen-shared content in the collaboration platform and summarize file content on mobile and on desktop.
Screen-shared content will be available for Copilot summarizations, insight and for use when drafting new content, according to the vendor.
The new file summaries ability will apply to one-to-one and group chats in Teams. This feature will also follow file security policies so that users with unauthorized access don’t receive summaries.

New Microsoft 365 Agents

Microsoft introduced a host of AI agents during Ignite, with one such offering, Agents in SharePoint, entering general availability.
These agents are grounded on users’ SharePoint sites, files and folders to improve finding answers from that content, according to Microsoft. Every SharePoint site will include an agent tailored to its content. Users can also make their own agents scoped to select SharePoint folders, sites and files.
Users can give agents a name and behaviors, and the agents answer questions in real time, according to Microsoft. The SharePoint agents will follow existing user permissions and sensitivity labels.
Employee self-service agents have entered private preview. These agents in Microsoft 365 Copilot Business Chat (BizChat) can answer common policy-related questions and do some human resources tasks such as understanding a particular employee benefit, retrieving payroll information and starting a leave of absence.
These agents can also handle some IT tasks, including requesting a new laptop and assisting with Microsoft 365 products. Users can customize these agents in Copilot Studio.
In preview are facilitator agents and project manager agents. Facilitator agents take notes in Teams meetings in real time and summarize information from Teams chats as conversations happen, according to Microsoft.
Project manager agents in Planner can create new plans and use preconfigured templates. The agent will oversee entire projects, assigning tasks, tracking progress and sending reminders and notifications. It can even complete tasks and create content.
Interpreter agents are expected to enter preview early next year. These AI agents can interpret up to nine languages in real time in Teams meetings. Meeting members can have the agent simulate their personal voice.

Azure AI Foundry

Microsoft introduced Ignite attendees to its Azure AI Foundry experience for designing, customizing and managing AI apps and agents.
Now available in preview are the Azure AI Foundry portal—the former Azure AI Studio—and the Foundry SDK.
The portal is the visual user interface for finding AI models, services and tools. Users can see subscription information in a single dashboard. IT administrators, operations personnel and those focused on compliance can manage AI apps at scale in the portal.
The SDK has a unified toolchain, 25 prebuilt templates and a coding experience users can access from GitHub, Visual Studio, Copilot Studio and other tools, according to Microsoft. Users can leverage the SDK for integrating Azure AI into their applications.
Coming soon to preview is the Azure AI Foundry Agent Service. This feature should allow developers to orchestrate, deploy and scale agents for automating business processes, according to Microsoft. Agent Service will allow for bring-your-own-storage and private networking for data privacy and compliance.
In December, the Foundry portal and SDK will gain a preview of Azure AI risk and safety evaluations for image content. These capabilities should help users assess the frequency and severity of harmful content in AI-generated outputs.
These evaluations will allow Azure AI to go beyond text-based evaluations and assess text inputs yielding image outputs, image inputs yielding text outputs and images with text—such as a meme—as inputs yielding text or images.
Users can leverage these evaluations for modifying multimodal content filters with Azure AI Content Safety and adjusting data sources for grounding. Users can also update system messages before deploying apps to production.

Copilot Control System

A Copilot Control System from Microsoft aims to help IT manage copilots and agents with data access, governance, security controls, measurement reports, business value tracking tools and adoption tracking tools.
One of the features of the Control System is Copilot in Microsoft 365 Administration Centers (MAC), now in private preview and set for general availability early next year, according to the tech giant.
Copilot in MAC leverages AI to handle routine tasks for IT administrators and suggest ways to get more value out of M365 subscriptions. It will be available in the admin centers for M365, Teams and SharePoint and provide summaries of trends across an administrator’s assigned areas. The copilot can also summarize message center posts and meeting reports across all apps and services. It can troubleshoot call quality and other user issues with natural language.
Another feature in the Control System is Copilot Analytics. General availability capabilities within Copilot Analytics include a dashboard that covers Copilot readiness, adoption and learning and M365 admin center reporting to surface adoption and usage trends.
In early 2025, Copilot Analytics will include Viva Insights for no additional charge. Insights is a measurement toolset for productivity and business outcomes.

Copilot Studio Updates

Copilot Studio gained a multitude of previews, including ones for autonomous agentic capabilities, an agent library and an agent SDK.
The autonomous agents can take actions on a user’s behalf without prompting each time. These agents act in the background, triggered by events such as a file being uploaded or an email being received, according to Microsoft.
The autonomous agents plan, learn from processes, adapt to new conditions and make decisions.
The library has templates for leave management, sales order, deal acceleration and other common agent scenarios.
The SDK will allow developers to build multi-channel agents that leverage Azure AI, Semantic Kernel and Copilot Studio services and are deployable across Teams, Copilot, web and third-party messaging platforms.
More previews include image uploads for agents to analyze and advanced knowledge tuning to match specific instructions to unanswered questions.
Copilot Studio integrations with Azure AI Foundry will give Studio access to 1,800-plus models in the Azure catalog. A bring-your-own model capability is in private preview, as is the ability to embed voice-enabled agents in Studio and voice experiences in applications and websites.
A new pay-as-you-go consumptive billing option for Copilot Studio messages through existing Azure subscriptions will become available for users on Dec. 1.

Copilot Pages Upgrades

Microsoft has plans to make new features in its Copilot Pages content creation canvas generally available in early 2025, including rich artifacts and multi-page support.
The rich artifacts support means Pages will gain the ability to include code, interactive charts, tables, diagrams and math from enterprise data and web data, according to Microsoft.
Multi-page support will give users ways to add content from multiple chat sessions and from Pages made in previous Copilot conversations.
Other features entering general availability include grounding Copilot chat prompts on Page content as the page is updated, for better answer relevancy, as well as viewing, editing and sharing Pages on mobile.

Copilot in PowerPoint, Outlook, Excel

Copilot in PowerPoint should gain some new features in 2025, including a narrative builder that works from a referenced file and support for organizational image libraries.
The narrative builder will enter general availability in January, allowing for better first drafts of slides, according to Microsoft. Copilot will add branded designs, speaker notes, transitions and animations to the presentation.
In the first quarter, a capability for bringing images from SharePoint Organization Asset Library, Templafy and other asset libraries into Copilot in PowerPoint will enter general availability.
Microsoft will also increase access to presentation translations, with all Copilot in PowerPoint web users getting the ability to translate presentations into one of 40 languages in December. Desktop and Mac users will gain the capability in January.
By the end of the month, Copilot in Outlook will gain the ability to schedule focus time and one-on-one meetings based on a user prompt. Copilot will find the best times for everyone and draft an agenda based on the meeting details in the prompt.
Before year’s end, Copilot in Excel will add a new start experience wherein Copilot suggests the type of spreadsheets users should make based on what they want. Copilot can also refine the template with headers, formulas and visuals.

Microsoft Places Enters General Availability

Microsoft revealed that its Places AI-powered workplace application has entered general availability, bringing location data to Teams and Outlook to help with in-office day planning.
Copilot can recommend when users should go into the office based on in-person meetings, guidance and when common collaborators will be in, according to Microsoft. It can manage room bookings for one-time or recurring meetings and help book rooms and desks based on images of the office and floor plans.
Administrators can leverage Places for an analysis of intended and actual occupancy for spaces.

Azure AI Content Understanding

Now in preview is the service Azure AI Content Understanding, which aims to assist developers in building and deploying multimodal applications.
The service uses GenAI to get information from documents, images, videos, audio and other unstructured data and put that information into customizable structured outputs, according to the tech giant.
Content Understanding has prebuilt templates and ways to customize outputs for call center analytics, marketing automation, content search and other use cases. The service also has prebuilt schemas for users to say what they want extracted from data, such as captions, transcripts, summaries, thumbnails and highlights.

Microsoft Fabric News

Microsoft’s Fabric data integration platform gained a host of new previews, including ones for Fabric Databases, SQL database in Fabric and open mirroring.
Fabric Databases aims to unite transactional and analytical workloads to improve development of AI-optimized apps, according to Microsoft. SQL database in Fabric is the first database engine in Fabric, with plans for Azure Cosmos DB and Azure Database for PostgreSQL to join.
SQL database in Fabric will allow for faster app building, with data automatically replicated into Fabric’s multi-cloud data lake OneLake and native vector search capabilities enabling retrieval-augmented generation (RAG).
This capability will also allow for auto-optimizing databases, auto-scaling them and translating natural language queries into SQL, with inline code completion alongside code fixes and explanations.
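For a sense of how native vector search can back a RAG workflow from Python, here is a hedged sketch that runs a nearest-neighbor query over pyodbc. The connection string, table, columns, VECTOR type and VECTOR_DISTANCE function are assumptions for illustration, modeled on Azure SQL’s preview vector features; consult Microsoft’s Fabric documentation for the actual surface.

```python
import pyodbc

# Hypothetical nearest-neighbor lookup for RAG grounding against a SQL
# engine with native vector search. Schema, connection string, VECTOR(3)
# and VECTOR_DISTANCE are illustrative assumptions, not a documented API.

CONN_STR = "Driver={ODBC Driver 18 for SQL Server};Server=<endpoint>;..."  # placeholder

query_embedding = "[0.12, -0.03, 0.88]"  # produced upstream by an embedding model

SQL = """
SELECT TOP 5 doc_id, chunk_text
FROM doc_chunks
ORDER BY VECTOR_DISTANCE('cosine', embedding, CAST(? AS VECTOR(3)))
"""

with pyodbc.connect(CONN_STR) as conn:
    for doc_id, chunk in conn.execute(SQL, query_embedding):
        print(doc_id, chunk[:80])
```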
The goal of open mirroring, meanwhile, is to allow any app or data provider to bring the data estate into OneLake within Fabric so they can write change data into a mirrored database in Fabric.
A new OneLake catalog is also now generally available for exploring, managing and governing the Fabric data estate across notebooks, lakehouses, warehouses, machine learning models and more.

For AI Data ‘It’s Us Versus AWS’

‘MinIO embraced the S3 API as the standard. And today, MinIO’s adoption is larger than Amazon S3’s. Customers are in the cloud, across the cloud, in private clouds, all the way to edge. Our user application ecosystem is much larger than Amazon’s. We are the single largest player. We made the S3 API an industry standard. Now all these AI applications … are largely built on MinIO’s API,’ says AB Periasamy, MinIO CEO and co-founder.

MinIO is known for its Amazon Web Services S3-compatible object storage technology, and its CEO and co-founder, Anand Babu “AB” Periasamy, even goes so far as to claim, based on data about his company’s offerings, that adoption of MinIO’s S3-compatible technology is now larger than that of AWS S3 itself.

Indeed, according to Periasamy, MinIO is the reason the AWS S3 standard has been adopted industry-wide, outside of the competitive cloud hyperscalers.
“Our user application ecosystem is much larger than Amazon’s,” Periasamy said. “We are the single largest player. We made the S3 API an industry standard. Now all these AI applications, because they are modeled after the cloud infrastructure, are largely built on MinIO’s API. [For example,] underneath all the popular vector databases is MinIO.”
[Related: Storage 100: The Digital Bridge Between The Cloud And On-Premises Worlds]
MinIO is focused on developing storage technology for AI and GenAI, an area where consolidating data from multiple sources and multiple clouds has become a critical first step, Periasamy said.
“Customers see if data is in different storage technologies, some here, some there, they can’t bring it to an AI practice,” he said. “For generative AI, they realized that buying more GPUs without a coherent data strategy meant GPUs are going to idle out. So they are now starting to consolidate data from different teams, different technologies, under one AI data infrastructure.”
For that reason, MinIO has developed strategic relationships with all the key AI infrastructure players, including Nvidia, Intel, and AMD, Periasamy said. Most recently, the company has initiated a relationship with chip designer ARM, an up-and-coming AI infrastructure company, and has expanded its Intel relationship. Intel is even an investor in MinIO.
There’s a lot going on at MinIO. For more information, read CRN’s full Q&A with Periasamy, which has been lightly edited for clarity.
How do you describe MinIO?
MinIO is an object store. It’s a data store for storing AI data. Our closest alternative is Amazon S3, but that’s a service. If you have data and want to store it in AWS S3, you have to take it to the cloud and to AWS. We are software, and you take our software to wherever there is data and build your data infrastructure. MinIO grew to become the most popular object store of choice. Before the cloud, people were storing their data on SAN, NAS, and virtual machines. But the traditional enterprise architecture is on its way out. Cloud is the gold standard for infrastructure. AWS is the blueprint.
Increasingly, we are seeing enterprises move towards AI and data as the heart of the business. AI was born in the cloud and is built around the cloud. Our customer base is bringing us and Kubernetes in. We became the AWS S3, and Kubernetes became the AWS EC2. Customers are taking us to their colos and data centers and building their AI and data infrastructures on top of us.
You said ‘object store’ and ‘AI’ a lot. Is MinIO’s object store aimed specifically at AI?
The company grew as a general-purpose object store. For the last 10 years, all kinds of use cases came on top of MinIO, from simple web development to log data processing, cybersecurity, and all kinds of analytics and database workloads. In the last three or four years, we started seeing increasing pull towards database, analytics, ML (machine learning), and deep learning. In the last 12 months, we saw a significant pull towards GenAI. Now the commercial focus of the company is on the AI data market. We are quickly scaling our business. Pretty much now it’s a fight between MinIO versus the public cloud. When you store even just hundreds of petabytes of data, it just becomes unsustainable. As you cross 40 or 50 petabytes, public cloud becomes very expensive. And as the scale of the data grows, customers are leaving public clouds and coming to MinIO. Now we are seeing significant commercial traction in AI workloads. It allows us to focus only on AI data workloads.
We also happen to be open source. We have an open-source version of MinIO which we offer for general purpose use. All kinds of people run it in very small use cases. Even the guys who run MinIO with hundreds of petabytes also run MinIO as a home NAS. They literally put MinIO on Raspberry Pi and a home NAS system like Synology or Qnap. It just runs everywhere on all kinds of devices. …
The commercial version of MinIO is focused on AI data use cases simply because we’re seeing traction in the AI market and the scale has grown. That also leads to bigger check sizes, so it’s certainly lucrative.
When you say AI, you’re talking specifically about GenAI?
Customers see if data is in different storage technologies, some here, some there, they can’t bring it to an AI practice. For generative AI, they realized that buying more GPUs without a coherent data strategy meant GPUs are going to idle out. So they are now starting to consolidate data from different teams, different technologies, under one AI data infrastructure. And now the chief data office is becoming an important business priority because when you consolidate all your data in one place for AI, they need to make a choice: public cloud or private cloud. Public cloud is where they initiated this movement, and now they are bringing it to private cloud. The first step is, consolidate all the data that is private to the enterprise because that’s its core asset. When you do, it quickly grows to hundreds of petabytes. Some customers are now reaching the exascale range. Even when a customer starts at 100 or 200 petabytes, they are telling us, ‘I need technology that can scale beyond exabytes, because that’s what we need to do for our AI infrastructure.’ We see a mix of all kinds of data. Some is table data, like Apache Iceberg-type semi-structured open table formats. Some is log data, audio, video, call center logs, customer support, conversations, and documents. All these data types are starting to be consolidated under one common catalog. Once that happens, businesses can talk about (building a GenAI practice).
You need to have an AI data infrastructure at the heart of your business. When you do this, what would you build it on? Not SAN, NAS, or traditional appliance models. You need to build it modeled on AWS. And that’s where it’s us versus AWS. Customers come to us and think of themselves as hyperscalers. They think of servers and drives as commodities. You go to a colo because what you want to own is your data. With vendors like Red Hat OpenShift for Kubernetes and MinIO for object store, they can go to a colo and build their AI practice.
Is MinIO’s technology aimed at specific applications, or is it for any application that needs AI data?
We could have introduced yet another API, but it would be another standard. We needed to pick one. When we started, we made a conscious bet on S3. Amazon was the gold standard in the cloud. Within AWS, S3 was the de facto standard. But S3 was unknown outside of the public cloud. No applications existed. MinIO embraced the S3 API as the standard. And today, MinIO’s adoption is larger than Amazon S3’s. Customers are in the cloud, across the cloud, in private clouds, all the way to edge. Our user application ecosystem is much larger than Amazon’s. We are the single largest player. We made the S3 API an industry standard. Now all these AI applications, because they are modeled after the cloud infrastructure, are largely built on MinIO’s API. [For example,] underneath all the popular vector databases is MinIO. We’ve become the de facto standard.
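The compatibility claim is easy to demonstrate: MinIO’s Python SDK exposes the familiar S3 bucket-and-object verbs. A minimal sketch follows, with a placeholder endpoint and credentials to be replaced by your own deployment.

```python
from minio import Minio

# S3-style object operations via the MinIO Python SDK. The endpoint and
# credentials below are placeholders -- substitute your own deployment.

client = Minio(
    "minio.example.com:9000",      # placeholder endpoint
    access_key="YOUR-ACCESS-KEY",  # placeholder
    secret_key="YOUR-SECRET-KEY",  # placeholder
    secure=True,
)

bucket = "training-data"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file as an object; the call mirrors S3 put semantics.
client.fput_object(bucket, "datasets/corpus.parquet", "corpus.parquet")

for obj in client.list_objects(bucket, prefix="datasets/", recursive=True):
    print(obj.object_name, obj.size)
```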
MinIO has a couple bits of news. First, you’re working with ARM. What are you doing here?
On the compute side, we’ve seen the industry shift towards GPUs and other accelerators because modern workloads use a lot of vector processing. … The CPU era is coming to an end, and GPUs are taking over. ARM is a CPU, but it is a different architecture. It’s not x86. We are starting to see ARM playing an important role on the data side as well. On the compute side, even Nvidia GPUs still have PyTorch and a bunch of code that needs to run on a CPU. They embraced ARM. And on the data side, we are starting to see ARM making inroads. It already happened in the cloud. Look at Amazon’s Graviton 3 ARM-based processor. It is a serious threat to the x86 architecture. Google Cloud and Azure also embrace ARM. We haven’t seen ARM adoption in the enterprise. That’s starting to happen. The problem is not a matter of technology. It’s the software ecosystem. We play an important role in the AI ecosystem. These are all new world applications which are already getting ARM compatibility. We played an important role on the data side, because we need to crunch data at ridiculous speeds. …
Talk to AI people, and what’s the number one concern? Power. We need to conserve every bit of power so we can give back to the CPUs, GPUs, and accelerators. ARM is power efficient, but at the same time, it is showing significant performance promise. ARM is also getting ready for 400-gigabit RDMA and other technologies to help keep GPUs busy. The improvements we made to ARM are not just a matter of software porting. ARM has certain functions useful for high-end data processing. That capability was added to MinIO.
MinIO recently expanded its Intel relationship. What’s going on there?
Intel has the Intel Tiber AI Cloud. The company recognizes the importance of getting into the cloud ecosystem and also enabling a new generation of cloud applications to support the Intel architecture. That cloud is built around MinIO for the object store side. An enterprise can go to the Tiber Cloud and build AI applications and data applications on the cloud, and when they mature, they can take that blueprint and build it for themselves. That’s what Intel is enabling. Intel is not trying to compete with the cloud, but give enterprises a private cloud and the blueprint so they can build it for themselves. Their own cloud is built on MinIO for the data.
Are there any similar special relationships between MinIO and AMD or Nvidia?
Clearly, Nvidia is the GPU leader of the pack. They have a huge advantage over the competition. We see the close number two is AMD, and then Intel Gaudi 3, which is showing promise. I think in about 12 to 18 months, AMD will have their software ecosystem ready. What I’m hearing from customers, the AMD Instinct MI300X accelerators are getting ready. The software ecosystem is catching up.
It’s good for the overall industry to have multiple options. Competition drives innovation. It will push Nvidia to the next level. It’s not like Nvidia is showing any signs of slowdown. … Intel is on our board and is one of our investors. But there are no strings attached with Intel. We are very closely working with Nvidia and AMD.
What’s the competitive environment for MinIO?
It’s public cloud versus us. If you have lots of data, are you going to put it in the public cloud, or are you going to take control of your data and build a private cloud? The fight comes down to public cloud versus us. We are the private cloud for your AI data. The public cloud has an answer. Public cloud works today, scales today. The problem comes down to economics. And if businesses go to legacy storage players, the wheels will come off the cart. They are not ready to handle AI scale.
Is MinIO a profitable company?
We’re cash flow positive. We haven’t touched the last round of funding we raised.
How much money does MinIO have on hand if needed?
I can’t disclose it completely. But I can tell you, our last round of funding, a B round, raised $103 million. It was supposed to be $300 million. We didn’t need the money, so we only took $103 million out of that, and that is collecting interest. We haven’t even touched that interest.
So you don’t expect to need more investment?
Our board meetings are often about our investors getting stressed that we are not spending the money. It’s a good meeting to have. I would rather have this problem than them asking me to downsize and do layoffs. There is so much interest from our investors. They want to pour money into the company. I didn’t want to take that money until we see how we could invest. I don’t want to buy growth. I want to invest.
We see that AI is super-charging everything. The scale has grown multiple times. Enterprises are now talking about exascale like they talked about one petabyte or two petabytes in the past. Because the check sizes are larger and there is so much money in the market, budgets have come, and customers are like, ‘What can I do in the AI space to modernize the business?’ It’s an exciting time for us now. We see a clear path for us to become a household name in the enterprise. Think of AI data? Think of MinIO.
You will see major investments from us on the marketing side. The product is very mature. On the sales side, there is clear inbound traction. These are large customers coming inbound because they love the product. There was some adoption in those companies already. It became a critical part of their business, like how Red Hat grew inside the enterprise. So they come to us. Marketing is the biggest investment that we would make now.
Does MinIO primarily work directly with customers or through indirect channels?
We have been building for the last year a very robust channel partnership network. We have hired some top talent in that space. We go to market both with resellers and through distributors. We have a relationship with Carahsoft, for example. We have several relationships with the big systems integrators around the globe, and partner on major deals with them. We are growing our relationship with the likes of WWT, for example. So it’s a very broad play. And because of the inbound nature of the business, coupled with the kind of very horizontal nature of what object store is, we have a very large ecosystem through those resellers and distributors.
So channels are a huge part of our business. We’re looking to build relationships with those partners that have the biggest mind share in major enterprises. And we want to do that because that’s where a lot of these AI conversations are initiated.
Any acquisition strategy?
We actually create technologies. For instance, we built a very powerful, large-scale vector database by ourselves. Then we didn’t want to compete with our own partners, so we killed it. We said, ‘OK, we’ll win the AI storage market and leave the vector database to our partners.’ We also built MinIO SQL. It’s a very powerful log processing technology like Elasticsearch and Splunk. We didn’t release that to market. We built a video search engine that we didn’t release to the market. Building tech is the easiest part of the business. Building a business around it is a whole different game. It’s very difficult. Go-to-market is always difficult. Why do we keep building these products? If we don’t build them, we will go insane, right? If we go into M&A, it will be more about who can accelerate us on the customer side.

Products Of The Year 2024: The Finalists

CRN staff compiled the top partner-friendly products that launched or were significantly updated over the last year. Now it’s up to solution providers to choose the winners.

Application Performance and Observability
As more applications run in hybrid-cloud and multi-cloud environments, maintaining application performance has become a more complex task. Application performance management and observability tools help IT organizations maintain the health, performance and user experience of business applications, according to market researcher Gartner. Such tools are used by IT operations managers, site reliability engineers, cloud and platform teams, application developers and software product owners.
Datadog Observability Platform
Dynatrace Unified Observability Platform
Grafana 11
IBM Instana Observability
New Relic Observability Platform
Splunk Observability Cloud

Artificial Intelligence: AI PCs
Everyday information workers and consumers are adopting the rapidly growing number of AI applications, copilots and other AI-driven software to improve their productivity and creativity. That’s fueling demand for personal computers with specialized processors, hardware and software to handle AI tasks. Global AI PC unit shipments are expected to exceed 43 million this year, according to market researcher Gartner, and soar to more than 114 million in 2025.
Acer TravelMate P4 14 (AMD)
Apple MacBook Pro (M3)
Dell Latitude 7455 (Qualcomm)
HP EliteBook 1040 G11 (Intel)
Lenovo ThinkPad 14S Gen 6 (Qualcomm)
Samsung Galaxy Book4 Pro (Intel)

Artificial Intelligence: Productivity Suites
Copilots, AI-powered assistants and other AI-based productivity software have become the most popular vehicle for users to tap into the power of artificial intelligence technology. These tools can help with everyday tasks including writing and editing documents, generating images, conducting research, automating repetitive tasks and more. AI productivity software, along with AI PCs, is the product category bringing AI capabilities to the masses.
Microsoft Copilot
Google Gemini

Artificial Intelligence: Infrastructure
Businesses and organizations are rapidly expanding their use of AI. Building, deploying and running AI and machine learning applications, however, takes a lot of compute horsepower and the ability to process huge amounts of data. That’s boosting demand for powerful AI hardware in data centers and the cloud. Systems that support AI initiatives are expected to provide high levels of performance and scalability.
Dell PowerEdge R760xa
Lenovo ThinkSystem SR780a V3
Supermicro AS-4125GS-TNHR2-LCC

Big Data
Data volumes continue to explode and the global “datasphere” – the total amount of data created, captured, replicated and consumed – is growing more than 20 percent a year and is expected to reach approximately 291 zettabytes in 2027, according to market researcher IDC.
But wrangling all that data is a major challenge for businesses and that’s fueling demand for a range of big data tools to help businesses access, collect, manage, move, transform, govern and secure data.
Cloudera Open Data Lakehouse
Databricks Data Intelligence Platform
HPE Ezmeral Data Fabric Software
Microsoft Intelligent Data Platform
Snowflake Data Cloud
Starburst Galaxy

Business Applications
Business applications, including Enterprise Resource Planning (ERP) and financial management software, are the operational backbone for many businesses and organizations. ERP applications are the tools they use to automate and manage their business processes including accounting and finance, HR, supply chain and procurement, manufacturing, and more.
Epicor ERP
Oracle NetSuite
Microsoft Dynamics 365
Sage Intacct
SAP S/4HANA
Syspro ERP

Business Intelligence and Data Analytics
Many businesses and organizations are deriving huge value and competitive advantages from data generated by their own IT systems, collected through customer transactions and acquired from outside sources.
Businesses analyze data to gain insights about markets, their customers and their own operations. They are using the data to fuel digital transformation initiatives. They are even using it to support new data-intensive services or packaging it into data products.
Amazon Redshift
Domo Data Experience Platform
Google Cloud BigQuery
MicroStrategy ONE
Qlik Sense
Salesforce Tableau

Data Protection, Management and Resiliency
Data is the lifeblood of the modern enterprise. Data that’s lost or unavailable, either due to system failure, a disastrous event like a fire or earthquake, human error or a cyberattack, can cause major disruptions.
Data resilience and protection systems and services help businesses protect and maintain access to data, and identify, detect, respond and recover from data-destructive events.
Cohesity Data Cloud
Commvault Cloud Powered by Metallic AI
Dell PowerProtect Data Manager Appliance
HYCU R-Cloud
Rubrik Security Cloud
Veeam Data Platform

Edge Computing and Internet of Things
Efforts to bring applications and data processing closer to data sources are driving the proliferation of local edge servers and IoT devices. That, in turn, is driving demand for products to better manage and support increasingly distributed computing networks.
The global market for edge computing hardware, software and services is expected to grow at a CAGR of 15.7 percent to $111.3 billion by 2028, according to Markets and Markets.
Eaton iCube
HPE Edgeline
IBM Edge Application Manager
Red Hat Device Edge
Scale Computing Autonomous Infrastructure Management Engine (AIME)
Schneider Electric EcoStruxure Micro Data Center R-Series

Hybrid Cloud Infrastructure
Hybrid cloud infrastructure combines cloud-based (often Infrastructure-as-a-Service) resources with on-premises/private cloud IT systems, working together and sharing applications and data to provide businesses with the flexibility and scalability they need to support distributed business workloads and processes. A report from Allied Market Research says the global hybrid cloud market was $96.7 billion in 2023 and will reach $414.1 billion by 2032.
Dell Technologies Apex Hybrid Cloud
HPE GreenLake
IBM Hybrid Cloud
NetApp Hybrid Cloud
Nutanix Hybrid Multicloud
VMware Cloud Foundation

MSP Platforms
Managed services have been one of the fastest growing segments of the IT channel as more businesses, organizations and government entities rely on MSPs to manage their IT infrastructure and end-user systems.
That’s boosting demand for MSP platforms, including the remote monitoring and management tools, professional services automation systems and other tools that MSPs rely on to do their jobs.
Atera
ConnectWise Asio Platform
HaloPSA
Kaseya 365
N-able Cloud Commander
Syncro Platform

Networking – Enterprise
Networking hardware, including routers, switches, hubs and bridges, has long been a mainstay of the channel. Today channel companies offer networking solutions and services that span data center and cloud networks, campus LAN and WAN, Network-as-a-Service, network management and automation, and network security systems.
Cisco Networking Cloud
HPE Aruba Networking Enterprise Private 5G
Juniper AI-Native Networking Platform
Nile NaaS
Prosimo AI Suite for Multi-Cloud Networking

Networking – Wireless
Wireless networks are key to making computing, communications and collaboration ubiquitous whether in the home, throughout offices and other workspaces, in manufacturing and industrial plants, and across large venues such as conference facilities and sports stadiums. Wi-Fi 7 is the seventh generation of wireless technology offering faster speeds and improved connectivity and capacity.
Extreme Networks AP5020 universal Wi-Fi 7 access point
Fortinet FortiAP 441K Wi-Fi 7 access point
HPE Aruba Networking Wi-Fi 7 access point
Zyxel Wi-Fi 7 access point

Power Protection and Management
Power protection and management systems and appliances are critical components for protecting IT infrastructure and keeping data centers up and running during extreme events. The product category includes technology for monitoring and managing power usage, protecting IT systems against electricity surges, and providing backup in the event of power failures.
CyberPower PFC Sinewave 1U UPS
Eaton 9PX 6kVA Lithium-Ion UPS
Schneider Electric Easy UPS 3-Phase 3M Advanced
Vertiv Liebert GXT5 Lithium-Ion UPS

Processors – CPUs
CPU semiconductors are the processing engines that power servers, laptop and desktop PCs, and mobile devices. Intel was long-dominant in the CPU space, but rival AMD has developed highly competitive products in recent years. Apple, meanwhile, has been developing its own “silicon” for its Mac, iPad and iPhone devices.
AMD Ryzen Pro 8040 Series
Intel Core Ultra Series
AmpereOne
Apple M3
Qualcomm Snapdragon X Elite

Processors – GPUs
Graphics processing units, or GPUs, are specialized processors originally developed to accelerate the performance of computer graphics. But they are increasingly being designed into IT systems for high-performance computing tasks such as data science and AI applications. Nvidia has been a pioneer in developing GPUs for a broad range of applications, but rivals Intel and AMD have been expanding their GPU product portfolios.
AMD Instinct MI300X
Intel ARC A570M
Nvidia H200

Public Cloud Platforms
Public cloud platforms provide organizations with an alternative to building and managing their own IT systems and data centers. Public cloud operators also offer their own portfolios of cloud products and services such as application hosting, data storage and analytics. The value proposition is that cloud services reduce capital spending for businesses and provide more flexibility by allowing them to scale IT usage up or down as needed.
Amazon Web Services
CoreWeave Cloud
Google Cloud Platform
Microsoft Azure
Oracle Cloud Infrastructure (OCI)
Snowflake Data Cloud

SD-WAN
SD-WAN is a software-defined approach to managing and optimizing the performance and security of wide area networks that connect users to applications and cloud platforms. SD-WAN benefits include improved performance and connectivity, enhanced security, simplified management and lower operating costs.
Cisco Catalyst SD-WAN
Extreme Networks Extreme Cloud SD-WAN
Fortinet Secure SD-WAN
HPE Aruba EdgeConnect SD-WAN
Palo Alto Networks Prisma SD-WAN
Zscaler Zero Trust SD-WAN

Security – Cloud and Application Security
The rapid growth of cloud computing has created new security challenges for businesses and organizations as they adopt and utilize distributed IT infrastructure and applications that lie both within and outside of the corporate firewall. Cloud and application security technologies provide a range of capabilities including protection against internal and external threats, identity and access control, and network visibility and management.
CrowdStrike Falcon Cloud Security
F5 Distributed Cloud Services Web Application Scanning
Orca Cloud Security Platform
Palo Alto Networks Prisma Cloud
SentinelOne Singularity Cloud Security
Tenable Cloud Security
Wiz Cloud Security Platform

Security – Data
Protecting data has become a top priority with the proliferation of ransomware attacks and other cybercrimes. The media is filled with headlines about businesses, hospitals, insurance companies, government entities and other organizations that find themselves blocked from accessing their own critical data or discover that their data has been stolen and is for sale on the Dark Web.
Data security tools provide a range of functions including data encryption, user authentication and data access control, real-time monitoring to detect and respond to unusual activity, compliance management for data governance requirements, and more.
ForcePoint ONE Data Security
IBM Guardium Data Protection
Proofpoint Information Protection
Rubrik Security Cloud
Wiz DSPM
Zscaler Data Protection

Security – Email and Web Security
Email and other internet communications are perhaps the most common vectors for cybersecurity attacks including spam, phishing, malware delivery and system takeover.
Email security products, including antivirus/antimalware tools, spam filters, authentication and encryption systems, are a key component of a business’s overall IT security strategy. Web security tools help prevent attacks against websites.
Abnormal Security Platform
Akamai API Security
Barracuda Email Protection
Cloudflare Application Security
Mimecast Advanced Email Security
Proofpoint Threat Protection

Security – Endpoint Protection
Businesses can be most vulnerable through the endpoint devices (desktop PCs, laptops, smartphones) employees use for everyday work, along with embedded devices, IoT and other edge computing systems. This is especially true with today’s post-pandemic hybrid work practices where many of these devices now sit outside of corporate security perimeters.
Products in this technology category include antivirus and antimalware tools, endpoint protection platforms, and endpoint detection/response and extended detection/response software.
CrowdStrike Falcon Insight XDR
Huntress Managed EDR
SentinelOne Singularity XDR
Sophos Intercept X
ThreatLocker Protect
Trend Micro Trend Vision One

Security – Identity and Access Management
Businesses use identity and access management tools, backed by related policies and processes, to manage digital identities and control access to corporate IT systems and data. IAM tools, a foundational cybersecurity technology for zero trust IT initiatives, are key to identifying, authenticating and authorizing users – including employees and trusted business partners – while protecting against unauthorized access.
CyberArk Workforce Identity
Ping Identity PingOne for Workforce
Okta Workforce Identity Cloud
Microsoft Entra ID
OpenText NetIQ Identity Manager
SailPoint Identity Security Cloud

Security – Managed Detection and Response
Many businesses and organizations, especially SMBs, lack in-house cybersecurity expertise. Many turn to managed detection and response (MDR) providers for outsourced services that monitor clients’ IT systems, endpoints, networks and cloud environments on a 24/7 basis and respond to detected cyberthreats. MDR offerings generally combine cybersecurity teams, advanced threat detection tools and security operations center functions.
Arctic Wolf MDR
CrowdStrike Falcon Complete Next-Gen MDR
Huntress MDR for Microsoft 365
SentinelOne Singularity MDR
Sophos MDR
ThreatLocker Cyber Hero MDR

Security – Network
Businesses face multiple challenges to keep their network infrastructure secure and operational. Potential threats include distributed denial-of-service (DDoS) attacks, network-based ransomware, insider threats and password attacks, to name a few.
Securing corporate networks, meanwhile, has become all the harder with the move to remote work and the increasing use of cloud applications.
The specific technology components of a sound network security strategy include firewalls, SASE (secure access service edge) systems, network access control technology, antivirus and antimalware software, intrusion prevention systems, and tools for cloud, application and email security.
Cisco Hypershield
Fortinet FortiGate
SonicWall Cloud Secure Edge
Sophos XGS Firewall
ThreatLocker Cyber Hero MDR
WatchGuard ThreatSync+ NDR

Security – Security Operations Platform
Security Operations links security and IT operations teams to improve an organization’s cybersecurity posture across IT systems, networks and applications. SecOps software incorporates tools and processes to provide a unified approach to cybersecurity to help identify security threats and vulnerabilities, reduce risks and respond more quickly to security incidents.
Arctic Wolf Security Operations
CrowdStrike Falcon Next-Gen SIEM
Google Security Operations
Microsoft Sentinel
Palo Alto Networks Cortex XSIAM 2.0
Splunk Enterprise Security

Security – Secure Access Service Edge
Secure Access Service Edge (SASE) platforms combine network and security services into a single cloud-based system – a critical concept for managing today’s multi-cloud environments and hybrid workforces. SASE can include multiple functions including zero-trust network access, secure web gateways, cloud access security brokers and firewall services to provide centralized control over identity and access policies and operations.
Cato SASE Cloud Platform
Cisco Secure Access
Fortinet FortiSASE
Netskope One SASE
Palo Alto Networks Prisma SASE
Zscaler Zero Trust SASE

Storage – Enterprise
Data volumes continue to explode and the global “datasphere” – the total amount of data created, captured, replicated and consumed – is growing more than 20 percent a year and is expected to reach approximately 291 zettabytes in 2027, according to market researcher IDC.
That data, of course, must be stored somewhere. While more data is being stored on cloud platforms, many businesses and organizations maintain on-premises data storage systems – either standalone or as part of a hybrid system – for a number of reasons including data security and governance and reduced internet costs.
Dell PowerStore
HPE Alletra Storage MP
Infinidat SSA Express
NetApp AFF C-Series
Pure Storage FlashArray//E
Quantum ActiveScale Z200 Object Storage

Storage – Software-Defined
Software-defined storage technology uncouples or abstracts storage management and provisioning from the underlying hardware. One benefit is that pools of physical storage resources can be managed as a single system, helping to reduce costs compared to traditional storage area network (SAN) and network-attached storage (NAS) systems.
DDN Infinia
Dell PowerFlex
HPE GreenLake for Block Storage
IBM Software-Defined Storage
Pure Storage Purity

Unified Communications and Collaboration
Unified communications (including Unified Communications as a Service) integrates VoIP, instant messaging, video conferencing and other communication capabilities through a single interface. UCC has taken on increased importance with more employees working from home and other remote locations.
UCC is a long-time channel mainstay with solution providers implementing, maintaining and operating UCC systems. The global UCC market is expected to reach $141.6 billion by 2027, according to Markets and Markets.
Cisco Webex
Intermedia Unite
Microsoft Teams
Nextiva Unified Customer Experience Platform
RingCentral RingCX

Outrun By Nvidia, Intel Pitches Gaudi 3 Chips For Cost-Effective AI Systems

In presentations and in interviews with CRN, Intel executives elaborate on the chipmaker’s strategy to market its Gaudi 3 accelerator chips to businesses that need cost-effective AI systems backed by open ecosystems after CEO Pat Gelsinger admitted that Intel won’t be ‘competing anytime soon for high-end training’ against rivals like Nvidia.

Intel said its strategy for Gaudi 3 accelerator chips will not focus on chasing the market for training massive AI models that has created seemingly unceasing demand for Nvidia’s GPUs, turned the rival into one of the world’s most valuable companies and led to a new class of expensive, energy-chugging data centers.

Instead, the semiconductor giant believes its Gaudi 3 chips will find traction with businesses that need cost-effective AI systems for training and, to a much greater extent, inferencing smaller, task-based models and open-source models.
[Related: Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown]
Intel outlined its strategy for Gaudi 3 when it announced last month that the accelerator chip, a key product in CEO Pat Gelsinger’s turnaround plan, will debut in servers from Dell Technologies and Supermicro in October. General availability is expected later in the fourth quarter, a delay from the third-quarter release window Intel gave in April.
Hewlett Packard Enterprise is expected to follow with its own Gaudi 3 system in December. System availability from other OEMs, including Lenovo, was not disclosed.
On the cloud front, Gaudi 3 will be available through services hosted on IBM Cloud early next year and sooner on Intel Tiber AI Cloud, the chipmaker’s recently rebranded cloud service that is meant to support commercial applications.
At a recent press event, Intel homed in on its competitive messaging around Gaudi 3, saying that it delivers a “price performance advantage,” particularly around inference, against Nvidia’s H100 GPU, which debuted in 2022, played a major role in Nvidia’s ascent as a data center vendor and was succeeded earlier this year by the memory-rich H200.
When it comes to an 8-billion-parameter Llama 3 model, Gaudi 3 is roughly 9 percent faster than the H100 but provides 80 percent better performance-per-dollar, according to calculations by Intel. For a 70-billion-parameter Llama 2 model, the chip is 19 percent faster but improves performance-per-dollar by roughly two times, the company said.
Intel has previously said that Gaudi 3’s power efficiency is on par with the H100 when it comes to inferencing large language models (LLMs) with an output of 128 tokens, but it has a performance-per-watt advantage when that output grows to 1,024 tokens. It has also said that Gaudi 3 has faster LLM inference throughput than the H200 with the same large token output. Tokens typically represent words or characters.
While Gaudi 3 was able to outperform the H100 and H200 on these two LLM inference throughput tests, the chip’s overall throughput for floating-point operations across 16-bit and 8-bit formats fell short of the H100’s capabilities.
For bfloat16 (BF16) and 8-bit floating-point (FP8) precision matrix math, Gaudi 3 can perform 1,835 trillion floating-point operations per second (TFLOPS) for each format, while the H100 can reach 1,979 TFLOPS for BF16 and 3,958 TFLOPS for FP8.
But even if the chipmaker can claim an advantage over the H100 or H200, Intel has to contend with the fact that Nvidia has accelerated to an annual chip release cadence, which means the rival plans to debut its next-generation Blackwell GPUs, which Nvidia has promised will be more powerful and efficient, by the end of the year.
Intel is also facing another rival that has become increasingly competitive in the AI computing space: AMD. The rival chip designer last week said its forthcoming Instinct MI325X GPU can outperform Nvidia’s H200 on inference workloads and vowed that its next-generation MI350 chips will deliver far greater performance.

Why Intel Thinks It Can Find A Way Into The AI Chip Market

Knowing the battle ahead, Intel is not intending to go head-to-head with Nvidia’s GPUs in the race to enable the fastest AI systems for training massive AI models, like OpenAI’s 1.8 trillion-parameter GPT-4 Mixture-of-Experts model.
In an interview with CRN, Anil Nanduri, the head of Intel’s AI acceleration office, said purchasing decisions around infrastructure for training AI models so far have been mainly made based on performance and not cost.
That trend has largely benefited Nvidia so far, and it has allowed the company to build a groundswell of support among AI developers. In turn, developers have made significant investments in Nvidia’s full stack of technologies to build out their applications, raising the bar for moving development to another platform.
“And if you think in that context, there is an incumbent benefit, where all the frontier model research, all the capabilities are developed on the de facto platform where you’re building it, you’re researching it, and you’re, in essence, subconsciously optimizing it as well. And then to make that port over [to a different platform] is work,” Nanduri said.
It may make sense, at least for now, for hyperscalers like Meta and Microsoft to invest significant sums of money in ultrapowerful AI data center infrastructure to push cutting-edge capabilities without an immediate need to generate profits. OpenAI, for instance, is expected to generate $5 billion in losses—some of which is tied to services—this year on $3.6 billion in revenue, CNBC and other publications reported last month.
But many businesses cannot afford to make such investments and accept such losses. They also likely don’t need massive AI models that can answer questions on topics that go far beyond their focus areas, according to Nanduri.
“The world we are starting to see is people are questioning the [return on investment], the cost, the power and everything else. This is where—I don’t have a crystal ball—but the way we think about it is, do you want one giant model that knows it all?” Nanduri said.
Intel believes the answer is “no” for many businesses and that they will instead opt for smaller, task-based models that have lighter performance needs.
Nanduri said while Gaudi 3 is “not catching up” to Nvidia’s latest GPU from a head-to-head performance perspective, the accelerator chip is well-suited to enable economical systems for running task-based models and open-source models on behalf of enterprises, which is where the company has “traditional strengths.”
“For the enterprises where we have a lot of strong relationships, they’re not the first rapid adopters of AI. They’re actually very thoughtful about how they’re going to deploy it. So I think that’s what’s driving us to this assessment of what is the product market fit and to our customer base, where we traditionally have strong relationships,” he said.
Justin Hotard, an HPE veteran who became leader of Intel’s Data Center and AI Group at the beginning of the year, said he and other leaders settled on this strategy after hearing from enterprise customers who want more economical AI systems, feedback that has informed Intel’s belief that there could be a big market for such products.
“We feel like where we are with the product, the customers that are engaged, the problems we’re solving, that’s our swim lane. The bet is that the market will open up in that space, and there’ll be a bunch of people building their own inferencing solutions,” he said in a response to a CRN question at the press event.
At a financial conference in August, Gelsinger admitted that the company isn’t going to be “competing anytime soon for high-end training” because its competitors are “so far ahead,” so it’s betting on AI deployments with enterprises and at the edge.
“Today, 70 percent of computing is done in the cloud. 80-plus percent of data remains on-prem or in control of the enterprise. That’s a pretty stark contrast when you think about it. So the mission-critical business data is over here, and all of the enthusiasm on AI is over here. And I will argue that that data in the last 25 years of cloud hasn’t moved to the cloud, and I don’t think it’s going to move to the cloud,” he said at the Deutsche Bank analyst conference.

Intel Bets On Open Ecosystem Approach

Intel also hopes to win over customers with Gaudi 3 by embracing an open ecosystem approach across hardware infrastructure, software platforms and applications, which executives said contrasts with Nvidia’s “walled garden” strategy.
Saurabh Kulkarni, vice president of product management in Intel’s Data Center and AI Group, said customers and partners will have the choice to scale Gaudi 3 from one system with eight accelerator chips all the way to a 1,024-node cluster with over 8,000 chips, with several configuration options in between, all meant for different levels of performance.
To enable the hardware ecosystem, Intel is replicating its Xeon playbook by providing OEMs with reference architectures and designs, which “can then be used as blueprints for our customers to replicate and build infrastructure in a modular fashion,” he said.
These reference architectures will be backed by a variety of open standards, ranging from Ethernet and PCIe for connectivity to DAOS for distributed storage and SYCL for programming, which Intel said helps prevent vendor lock-in.
When it comes to software, Intel executive Bill Pearson said the company’s open approach means that partners and customers can choose from a variety of tools from different vendors to address every software need for an AI system. He contrasted this with Nvidia’s approach, which has been to create many of its own tools that only work with Nvidia GPUs.
“Rather than us creating all the tools that a customer or developer might need, we rely on our ecosystem partners to do that. We work with them, and we help the customers then choose the one that makes sense for their particular enterprise,” said Pearson, who is vice president of software in the Data Center and AI Group.
A key aspect of this open ecosystem software approach is the Open Platform for Enterprise AI (OPEA), a group started earlier this year under the Linux Foundation that is meant to serve as the foundation for microservices that can be used for AI systems. Members of the group range from chip companies like AMD, Intel and Rivos, to a wide variety of software providers, including virtualization providers like VMware and Red Hat as well as AI and machine learning platforms such as Domino, Clarifai and Intel-backed Articul8.
“When we look at how to implement a solution leveraging those microservices, every component of the stack has multiple offers, and so you need to be very specific about what’s going to work best for you. Is there a preference that you have? Is it a purchasing agreement? Is it a technical preference? Is there a relationship preference?” said Pearson.
“And then customers can choose the pieces, the components, the ingredients that are going to make sense for their business. To me, that’s one of the best things about our open ecosystem, is that we don’t hand you the answer. Instead, we give you the tools to go and select the best answer,” he added.
Key to Intel’s software approach for AI systems is a focus on retrieval-augmented generation (RAG), which allows LLMs to perform queries against proprietary enterprise data without creating the need to fine-tune or re-train those models.
“This finally enables organizations to customize and launch GenAI applications more quickly and more cost effectively,” Pearson said.
To help customers set up RAG-based AI applications, Intel plans to introduce Intel AI for Enterprise RAG, a catalog of solutions developed by Intel and third parties that is set to debut before the end of the year. These solutions address use cases ranging from code generation and code translation to content summarization and question-and-answer.
Pearson said Intel is “uniquely positioned” to address challenges faced by businesses in deploying RAG-based AI infrastructure with technologies developed by Intel and partners, which start with validated servers equipped with Gaudi and Xeon chips from OEMs and include software optimizations, vector databases and embedding models, management and orchestration software, OPEA microservices, and RAG software.
“All of this makes it easy for enterprise customers to implement solutions based on Intel AI for Enterprise RAG,” he said.

Channel Will Be ‘Key’ For Gaudi 3 Rollout

In an interview with CRN last week, Greg Ernst, corporate vice president and general manager of Intel’s Americas sales organization and global accounts, said channel partners will be critical to getting Gaudi 3-based systems in the hands of customers.
For Intel to get to this point, Ernst said, the chipmaker needed Gaudi 3 to win broad support from server vendors that “partners like World Wide Technology can really rally around.” He added that Intel has “done a lot of learning with the partners of how to sell the product and implement product support.”
“Now we’re ready for scale, and the partners are going to be key for that,” he said.
Rohit Badlaney, general manager of IBM Cloud product and industry platforms, told CRN that the company’s “build” independent software vendor (ISV) partners, value-added distributors and global systems integrators are the three major channels through which IBM plans to sell cloud services based on Gaudi 3, which will largely be focused around its watsonx AI platform.
“We’ve got a whole sales ecosystem team that’s going to focus on build ISVs, both embedding and building with our watsonx platform, the same kind of efforts going on now with our Red Hat developer stack,” he said at Intel’s press event last month.
Badlaney said IBM Cloud has tested Intel’s “price performance advantage” claims for Gaudi 3 and is impressed with what it has found.
“As we look at the capability in Gaudi 3, specifically for our watsonx data and AI platform, it really differentiated in our testing from a cost-performance perspective. So the first set of use cases that we’ll apply it to is inferencing around our own branded models and some of the other models that we see,” he said.
Vivek Mohindra, senior vice president of corporate strategy at Dell, said by adopting Gaudi 3 into its PowerEdge XE9680 portfolio, his company is giving partners and customers an alternative to systems with accelerator chips from Intel’s rivals. He added that Dell’s Omnia software for managing high-performance computing and AI workloads works well with the OPEA microservices, giving enterprises an “easy button” to deploy new infrastructure.
“It gives customers a choice as well, and then on software, with our Omnia stack being interoperable with [Intel’s] OPEA, that provides for an immense ability for the customers to adopt and scale it relatively easily,” he said at Intel’s press event.
Alexey Stolyar, CTO of Northbrook, Ill.-based systems integrator International Computer Concepts, told CRN that his company started taking high-level training courses around Gaudi 3 and that he can see the need for cost-effective AI systems enabled by such chips, mainly because of how much power it takes to train or fine-tune massive models.
“What you’re going to find is that a lot of the world is going to focus on smaller, more efficient, more precise models than these huge ones. The huge ones are good at general tasks, but they’re not good at very specific tasks. The enterprises are going to start developing either their own models or fine-tune specific open-source models, but they’re going to be smaller and they’re going to be more efficient,” he said.
Stolyar said that while International Computer Concepts hasn’t started talking to customers proactively about Gaudi 3 systems, one customer has already approached his company about developing a Gaudi 3 system for a turnkey appliance the customer plans to sell for specific workloads, based on benchmarks in which the chip has been shown to perform well.
However, the solution provider executive said he isn’t sure how big of an opportunity Gaudi 3 represents yet and added that Intel’s success will heavily rely on how easy Gaudi 3 systems are to use in relation to those powered by Nvidia chips and software.
“I think customers want alternatives. I think having good competition is good, but it’s not going to happen until that ease of use is there. Nvidia has been doing this for a while. They’ve been fine-tuning their software packages and so on for a long time in that ecosystem,” he said.
A senior leader at a solution provider told CRN that his company’s conversations with Intel representatives have given him the impression that the chipmaker isn’t seeking to take Nvidia head-on with Gaudi 3 but is instead hoping to win a “percentage” of the AI market.
“They’ve been talking about Gaudi 3 for a long time: ‘Hey, this is going to be the thing for us. We’re going to compete.’ But then I think they’re also sort of coming in with tempered expectations of like, ‘Hey, let’s compete in a percentage of this market. We’re not going to take on Nvidia, per se, head-to-head, but we can chew away at some of this and give customers options. Let’s pick out five customers and go talk to them,’” said the executive, who asked to not be identified to speak frankly about his work with Intel.
The solution provider leader said he does think there could be a market for cost-effective AI systems like the ones that are powered by Gaudi 3 because he has heard from customers who are becoming more conscious about high AI infrastructure costs.
“In some ways, you’re conceding that somebody else has already won when you take that approach, but it’s also pretty logical to say, ‘Hey, if it does all these things, you would be a fool not to look at it, because it’ll save you money and power and everything else.’ But that’s not a take-over-the-world type of strategy,” he said.

Cisco CEO On Besting HPE-Juniper, Splunk Integration And AI Future

Cisco CEO Chuck Robbins talks about having a leg-up over HPE-Juniper, cross-selling Splunk opportunities, Cisco’s bullish AI strategy and his thoughts on the U.S. economy in 2025 with the upcoming presidential election.

Cisco CEO Chuck Robbins is confident that his company’s longtime networking leadership will continue regardless of new competition from HPE with its pending Juniper Networks acquisition.
“You look at the combination of networking and security and the importance of those two coming together—which they (HPE) do not have—and you look at data center infrastructure, you look at campus networking with wireless, with all of the observability, the security and everything that we have—I mean, we have more technology that brings more value to our customers in the infrastructure layer than anybody else,” said Cisco’s Robbins on stage at the 2024 XChange Best of Breed Conference today in Atlanta.
HPE CEO Antonio Neri said at the conference this week that with HPE’s upcoming $14 billion acquisition of Juniper, “we are becoming a networking company at the core. Something that probably Cisco has forgotten now for a little bit.”
When asked for a response, Robbins said with a smirk: “Well, we forgot more about networking than they’ll probably know about networking.”
[Related: Cisco CEO Chuck Robbins: Moving Fast To Win The AI Battle]
Robbins also spoke about Cisco’s $28 billion blockbuster acquisition of Splunk in terms of its channel partner and integration strategy.
“Well, when you spend $28 billion, you want to be a little careful that you don’t screw something up,” Robbins said. “So we are being thoughtful about it. You have to get through the core fundamentals. It’s like playing sports: you got to get all the foundational and the fundamentals right before you start doing a lot of fancy things. So what we implemented this year is we have a bonus program in place for the Cisco sales force to simply open the door and introduce the Splunk sales team to the CIO of any customers that aren’t using Splunk.”
Robbins also chatted about the upcoming 2024 U.S. presidential election and the potential impact on the economy.
“I think our system is built in a way that restricts radical policy shifts,” said Robbins.
In an interview at XChange with CRN’s Jennifer Follett and Steven Burke, Robbins talks about Cisco networking besting HPE-Juniper, Cisco’s AI strategy, the Cisco-Splunk strategy and AI’s future.

What is the biggest difference between the Cisco-Splunk networking story and the HPE-Juniper networking story?
I did read the article. Look, the difference is we didn’t need [to buy] a networking company, but what our customers are looking for is they’re trying to make sense of all the data they have.
And given the footprint that we have and the platform that Splunk has, if you think about all the insights that can be delivered from networking devices, all the threat intelligence that we get from Talos, all the intelligence we get from technologies like from ThousandEyes—and if you put all that together with all the log data and everything else that Splunk currently sees, then you apply a layer of AI on top of it—we think we can give our customers greater insights about what’s happening in their infrastructure, what’s happening with their applications, what’s happening in their security, and the security of their infrastructure.
We think we can do that more effectively for our customers better than anybody else. That’s what it really brings to us.
So Splunk really brings all of those insights to life. If you contemplate the use of AI by the bad actors around the world, from a security perspective, if we aren’t as good or better at leveraging AI on the most comprehensive data set that you can possibly have of threat intelligence that’s going on—and we see billions of threats every day—if you’re not using AI to actually correlate all these disparate things that are happening in your infrastructure, then you’re going to lose.
As we always say, it’s clichéd but they only have to be right once.
So I think all of that together makes Splunk a great deal for us.
So far, the integration is going incredibly well. We’ve integrated our XDR platform with SIEM (security information and event management) already. We actually are filtering high fidelity alerts out of our Talos threat intelligence and putting that in Splunk already.
There’s some stuff we’re going to talk about at our partner summit in a couple weeks, and we’re just going to keep rolling with the innovation. So it’s good.

HPE CEO Antonio Neri mentioned when he was here that he believes Cisco may have forgotten that it’s a networking company at heart. What is your response to critics who say you’ve taken your eye off the networking ball?
Well, we forgot more about networking than they’ll probably know about networking.
I thought it was interesting that [HPE CEO Neri said] Juniper excelled into campus, and they have 2.6 percent market share in campus. HPE actually has more.
So I guess they think more of Juniper’s campus portfolio than they do their own.
But look, we have the most comprehensive portfolio, whether you’re looking at cloud infrastructure today, AI infrastructure underneath training models, the technology that we’re going to deliver for an end-to-end stack for how our enterprise customers are going to deploy AI applications with HyperFabric.
You look at the combination of networking and security and the importance of those two coming together—which they (HPE) do not have—and you look at data center infrastructure, you look at campus networking, with wireless, with all of the observability, the security and everything that we have—I mean, we have more technology that brings more value to our customers in the infrastructure layer than anybody else.

Cisco’s gone through so many evolutions as a company. So what is the elevator pitch of what the Cisco identity is today?
If you really think about it, we are the only company who can bring networking, observability, security to our customers—all integrated together.
We think that all of those things coming together are more important than they ever have been, and that’s what we’re going to deliver: a secure networking company that actually delivers incredible capabilities—whether it’s an AI-ready data center, a future-proof workplace, or an underlying layer of digital resilience.

With the Splunk acquisition, you brought in former Splunk CEO Gary Steele. What changes are you looking for him to make, and what impact do you want him to have on the combined Cisco-Splunk sales and channel strategy?
Gary’s great. Gary is an operator. He did a phenomenal job of running Splunk for the two years before we actually completed the acquisition. He’s done an amazing job.
He really excels at simplifying things. So our partners will appreciate that.
The teams are working on evolving our partner program right now, which we’re going to talk about at Cisco Partner Summit in a couple of weeks.
He’s looking at simplifying our coverage models, simplifying the engagement model and really trying to help our teams just spend more time actually with partners and customers on a monthly basis. That’s what he’s focused on. I think he’s going to make a ton of progress this year.

What does Splunk do to the overall culture of Cisco? Is there some sort of ‘Splunk-ification’ of Cisco going on as more Splunk folks come into leadership positions?
I don’t think there’s a ‘Splunk-ification.’ That’s a good word though.
What you have is the same thing you have with any leader who comes in and is looking at anything through a fresh set of eyes. (Splunk’s CEO Steele) asked questions like, ‘Why do we do that?’ Which is always healthy.
There’s a beautiful six months of ignorance when you take a new job, because you can ask any question you want and he’s done that. He’s challenged us in some areas. He’s asked the right questions about why do we do this?
He’s actually got an incredible balance.
You would think that he would come in and want to push the Splunk portfolio across the entire sales force, and he reminds us quite regularly, ‘Hey, just remember, we need these salespeople selling lots of network infrastructure. So let’s not distract them. Let’s stay focused.’ So the biggest thing that he does, though, is he brings a fresh set of eyes.

What’s your take on the cross-sell opportunity between Cisco and Splunk?
Well, when you spend $28 billion, you want to be a little careful that you don’t screw something up.
So we are being thoughtful about it. You have to get through the core fundamentals. It’s like playing sports: you got to get all the foundational and the fundamentals right before you start doing a lot of fancy things.
So what we implemented this year is we have a bonus program in place for the Cisco sales force to simply open the door and introduce the Splunk sales team to the CIO of any customers that aren’t using Splunk. So just foundational, basic stuff.
At the same time, we’re getting all their employees into our systems and all those kinds of things, and doing all the hard work of doing a big integration that’s going on at the same time. So we’re going to move as fast as we possibly can, but not faster than we’re comfortable with.

From the partner perspective, not only on the Cisco side or the Splunk side but also for those who aren’t working with you at all yet and are now looking at this new Cisco-Splunk combination: What is the order of magnitude of the opportunity for those partners?
I would highly encourage all of you to look at Splunk and the security integration that we’re doing. The next generation SOC and the opportunity that our customers are obviously moving towards, I mean—security is a massive data problem, and to the extent that we can help them actually leverage AI, leverage automation, to actually correlate these threats and get to the root of the problem faster, then they’re going to be more effective.
First of all, Splunk is highly concentrated in the upper end of the account base. They’re highly concentrated in the United States. So there’s a huge opportunity for us to expand.
They didn’t have a super robust partner program. They had a partner strategy and they worked with lots of partners, but it wasn’t quite as comprehensive.
This business is about the services around it and helping customers get these SOCs built and up and running. So it’s quite good for the partners. As we continue to drive the integration across the security portfolio into Splunk, you’ll just see more and more opportunity as this evolves. And then the observability side of it as well.

Where are you going to make the investments to help partners cross the AI chasm and really deliver on this vision of security, observability and networking?
So you’re going to see this begin to roll out at the partner summit, in terms of what we have coming up from an AI perspective. We have our WebexOne event coming up. We’ve got our partner summit. We have Cisco Live Asia. We have an AI event we’re doing in January. Then we have Cisco Live Europe and we have Cisco Live in the U.S.
We have this roadmap of product announcements across this space that are all going to be around AI, leading with infrastructure, both compute and networking, and security. We’re obviously building a lot of AI capabilities into our portfolio. We’re going to lay out the whole architecture at the partner summit in a couple of weeks.
You’ll see our incentive structure shift to align with that. Because it’s super important right now—and this is something else Gary (Steele) believes in deeply—that the incentives we put in place for our partners and our sales teams stay aligned. When they’re aligned, we do really great things together. So we’ll be focused on making sure that’s true.

Cisco is really well prepared for this AI transition. Why is that and what do you see there that’s critical to partners?
There’s a multitude of areas that we’re focused on in the AI space.
First of all, cloud infrastructure underneath the training model. So we’re deployed in three of the four as Ethernet underneath the GPU for these training clusters. So we’re doing that first and foremost. So that says we’re on the front end of this whole trend.
The second thing is infrastructure in the enterprise. We announced HyperFabric a few months back and it’s going to be available the first of the year. We’re actually working with Nvidia on some features that they’re still working on for us to get that delivered. That’s going to be Cisco networking, Vast storage, Nvidia GPUs, Cisco CPUs, with cloud-based orchestration and lifecycle management capabilities, with security integrated as well. So that’s happening in the enterprise.
We want to really help our customers more easily deploy these AI applications as the use cases become more apparent. We’re focused on security. We’ve launched Hypershield. We’re working on some other technology that will actually provide a layer of security in front of these custom models that our enterprise customers may run. So there’s a whole security aspect that’s going on.
We need security for AI and AI for security, and we’re working on both of those.
We’re going to have a whole set of services capabilities to help our customers think through use cases, in conjunction with our partners, as we go forward.
We understand our role in the web scale space. We understand our role in the enterprise. We understand our role in the security side of AI. And we understand our role in helping our enterprise customers understand how to deploy AI.

A great unknown is what the economy is going to look like in 2025. With this election coming up, we are going to have a new president in January, one way or the other. What are your thoughts on what the impact is going to be on the economy based on whoever gets elected?
Look, we’ve had both parties in office over the last eight years and the economy has been incredibly resilient. So I think our system is built in a way that restricts radical policy shifts.
I think that there’s so much happening right now that’s positive in the U.S. that the policies and the approach will be different depending on which administration’s in office. But the last eight years has proven that we have a very— especially in light of the inflation that we dealt with, the supply chain shocks that we dealt with, the interest rate increases, the subsequent beginning of the easing—I think we’ve just proven that we have a very resilient economy.
So I think this is going to continue to chug along. We’ll just have to deal with different issues, depending on who’s in office relative to different policy issues and how we respond to them.