Gelsinger’s sudden exit from Intel raises doubts about its strategy

In interviews with CRN on Monday, executives from U.S. solution providers that partner with Intel expressed concern about the company’s future in light of Pat Gelsinger’s sudden exit and said the chipmaker should focus on its traditional CPU business and its nascent Gaudi accelerator chips.

Intel channel partners said the sudden departure of CEO Pat Gelsinger raises questions about the company’s current strategy, which has placed additional emphasis on expanding its manufacturing capabilities on top of its chip design business.
In interviews with CRN on Monday, executives at U.S. solution providers that partner with Intel expressed concerns about the company’s future following Gelsinger’s exit and said it should focus on its traditional business, which includes Core CPUs for PCs and Xeon CPUs for servers, in addition to its new Gaudi product line for AI servers.
[Related: Intel: Partners Will Play ‘Massive Role’ In 2025 Gaudi 3 AI Chip Rollout]
Eric Stromquist, president of Beaverton, Ore.-based Chrome device maker CTL, said Gelsinger’s sudden retirement, which the company announced Monday morning, has created uncertainty about Intel’s current strategy, given how integral the chipmaker veteran was to building and championing it.
“Obviously this is not good news, because that was Pat’s strategy,” he said.
Multiple reports, including one from Bloomberg, said Intel’s board of directors gave Gelsinger the option of resigning or being fired after losing confidence in his turnaround plan, in which the company spent tens of billions of dollars expanding its manufacturing footprint as well as its advanced chip-manufacturing capabilities.
One ray of hope for Stromquist comes from the news that Michelle Johnston Holthaus, an experienced sales and channel executive who most recently led Intel’s Client Computing Group, will serve in the newly created role of CEO of Intel Products, which includes the client business as well as the Data Center & AI Group and Network & Edge Group.
Holthaus was also named interim co-CEO along with Intel CFO David Zinsner.
“Michelle is an experienced leader. I have known her for a long time, so I think she is a good manager of the Intel business, [and] she is a good choice to co-lead the company. But obviously [Gelsinger’s sudden exit] is saying that the previous strategy is in doubt,” Stromquist said.
In Intel’s announcement about Gelsinger’s retirement, longtime Intel board director Frank Yeary, who now serves as its interim executive chairman, said the chipmaker needed to put its product group “at the center of everything we do.”
“Our customers demand it from us and we will deliver,” he said in a statement. He said Intel will continue to advance its manufacturing and foundry capabilities.
According to Stromquist, Intel should now focus on “getting back to the basics of working with [partners] and coming up with great solutions.”
“Perhaps Intel’s historic go-to-market model needs to be questioned. I don’t know. But I always encourage them to do what they do best, which is build a great channel, create great products, and the business will take care of itself,” he said.

Gelsinger was credited with righting ‘a very big ship’

An executive at a U.S. Intel distribution partner said that even without the news reports explaining why Gelsinger retired on Sunday, the timing and nature of the exit announcement made it likely that the CEO’s departure was not voluntary.
“Normally, when someone retires from Intel, it’s a very long process,” said the distribution executive, who spoke on the condition of anonymity to speak candidly. “They announce they are retiring. They usually stay for a transition period and do all those types of things over a very long and drawn-out period, not just ‘I’m retiring, and today is my last day.'”
Intel’s board of directors is now looking for a permanent successor to Gelsinger, with the distribution executive saying he expects whoever takes over will make executing on Intel’s chip road map a big priority.
He said Gelsinger, who worked at Intel for 30 years before his nearly four-year tenure as Intel CEO, should get a lot of credit for what he did to improve Intel’s situation, which was in a shaky state before the company stalwart’s arrival.
“I think Pat did a lot of things to steer a very big ship in the right direction. I don’t think there is a [U.S.] CHIPS Act without Pat Gelsinger. I don’t think the focus on American semiconductor manufacturing is sustained without him,” he said.
Like Stromquist, the distribution executive was pleased to see Holthaus take on elevated leadership roles within the company.
“She is someone who knows Intel very well. She knows the channel business very well. She knows all of Intel’s business areas, so I think she has a really good, broad understanding of Intel’s different customer bases, which is important for us as a distributor,” he said.

Partner: Gelsinger was ‘scapegoat’ for Wall Street

Randy Copeland, president and CEO of Richmond, Va.-based Velocity Micro, said that while he did not find the news of Gelsinger’s sudden exit surprising, he did not think it was “the wisest decision” given how much Intel has changed over the past three years to execute on Gelsinger’s expensive and ambitious IDM 2.0 strategy.
“They’re going to have a new CEO with a brand new vision and a brand new road map, and I think that’s going to put them in some more trouble. Pat’s plan was extremely aggressive, but they’ve stuck with it so far, and now I’m worried about what’s going to happen next. Are they going to turn around again and go in the other direction? I feel like then they’re not accomplishing anything,” he said.
For Copeland, the board should have been better prepared to face any challenges that came with Gelsinger’s turnaround plan, including the revival of the company’s contract manufacturing business under the Intel Foundry name.
“This was a very long-term plan, and I think before the board approved it, they should have thought about what the next [few] years were going to look like, because now they’re obviously deep in the middle of it,” he said.
While Copeland said Intel is “still years away from catching up” with Taiwanese foundry giant TSMC, he believes a bet on contract manufacturing could still work.
He said, “If they can come up with a real US-based competitor in a few years, I’m pretty confident that Intel is going to be a formidable company.”
Despite feeling uncertain about Intel’s future, Copeland does not think the company faces any existential threat, even though he does not think its problems will end with Gelsinger.
“I don’t think they’re going to completely explode or anything, but I do think they’re using Pat as a scapegoat for a company-wide strategy that’s not going as fast as Wall Street wants,” Copeland said.

Partner sees challenges with Gaudi, Xeon chips

Dominic Denninger, vice president of engineering at Burnsville, Minn.-based Nor-Tech, said he was surprised to hear the news about Gelsinger.
“I thought he had only partially completed the job, but apparently there was disagreement on the board. When there’s not alignment, it’s not good,” he said.
While Intel has made progress in expanding its manufacturing footprint, introducing advanced chipmaking nodes at an accelerated pace and naming some major customers for Intel Foundry, those projects have yet to boost its financial position.
“I think it’s taken longer than anyone anticipated,” Denninger said.
The Nor-Tech executive said Intel has also struggled to introduce a formidable accelerator chip that can rival Nvidia’s GPUs in AI computing. While Intel’s new Gaudi 3 chip has received support from several large server vendors, including Dell Technologies and Hewlett Packard Enterprise, Gelsinger said in October that the Gaudi product line is expected to fall short of its modest $500 million revenue target for 2024.
“If you go back [and look at] several attempts to go after that [market], just none of them have been successful,” Denninger said.
One bright spot for Denninger is Intel’s recently launched Xeon 6 server processors. Still, he said, those products have received a slow response among his customers, many of whom are in the high-performance computing sector.
“Previous generations of Xeons have been very reliable [with] pretty good performance. And all the reports I’m seeing on the Xeon 6 are good. But we are not getting a lot of calls from people here to test them,” he said.
To Denninger, Gelsinger’s sudden exit suggests that big changes are coming.
“They’re going to have to make some big changes in strategy and think seriously about some things, and there’s probably going to be a lot of consolidation within corporate Intel,” he said.

Partners see potential for AI sales growth with CEO shakeup

Two solution provider executives said they hope Gelsinger’s sudden exit could allow the chip maker to place more emphasis on growing AI sales with enterprises.
“I think this could strengthen Intel’s position in the enterprise AI solutions market,” said Patrick Shelley, CTO of Montvale, N.J.-based PKA Technologies, No. 438 on CRN’s 2024 Solution Provider 500 list.
In September, Intel outlined its strategy around Gaudi 3, which is based on the idea that enterprises will turn to Gaudi chips for AI systems that are more cost-effective than Nvidia GPU-based systems for running small AI models.
“Intel is already doing some really innovative things in the AI market, especially with Gaudi 3, which is a very cost-competitive AI solution for the enterprise. We believe this is going to make AI more affordable for customers, especially in SLED [state, local and education]. I see this sparking more innovation for Intel in the enterprise AI market,” Shelley said.
Bob Venero, CEO of Fort Lauderdale, Fla.-based Future Tech Enterprise, No. 76 on CRN’s 2024 SP500, said he’s confident Intel will remain a major player in the AI era, especially when it comes to the PC market.
“I see Intel continuing to grow its AI business, especially as the PC remains an important part of the ecosystem that is connected to AI,” he said. “The only way you access AI systems is through your PC, and the best way to run a PC is on Intel chips. Pat’s retirement won’t change that. It just means there is going to be a new CEO who will lead Intel into the AI future.”
Venero said he would like to see Intel double down on product innovation with major OEMs like HP Inc., Dell and Lenovo.
“Those are the companies that are driving the major investments in R&D around the PC product set,” he said. “This is where Intel needs to invest in differentiated products that can help advance the journey in AI and beyond.”
Even after the changing of the guard, Intel remains one of the strongest technology product companies in the business, according to Venero.
“Look at the long line of x86 products that have driven decades of productivity and growth for the channel. The channel would not be possible without Intel. Think about who is running all the PCs, servers and chipsets on storage platforms: It’s Intel, Intel, Intel. There are other players out there, but no one has the depth and breadth of the product line that Intel offers.”
Venero said he expected the new leadership and Gelsinger’s eventual replacement to remain strong supporters of the channel.
“Intel has always been a channel-first company,” he said. “The new executives clearly know the value of the channel. I doubt they will make any major changes as it relates to supporting their channel partners.”
Additional reporting by Steven Burke.

CRN’s 2024 Products Of The Year

CRN staff compiled the top partner-friendly products that launched or were significantly enhanced over the past year and then turned to solution providers to choose this year’s winners.

And The Trophies Go To…
The CRN 2024 Products of the Year awards honor the leading partner-friendly IT products as selected by the solution providers that develop solutions and services around these products and bring them to their customers.
CRN editors selected finalists in 30 technology categories from products that were newly launched or updated between September 2023 and September 2024. The categories range from mainstay channel products in enterprise networking, enterprise storage and SD-WAN to products in newer technology areas such as application performance/observability, artificial intelligence architecture and AI PCs.
We then asked solution providers to rate the products based on three subcategories: technology, revenue and profit, and customer need. The product with the highest overall score (the average of the three subcategory scores) in each category was named the winner.
What follows are the winners, subcategory winners and finalists for 2024.

Application Performance and Observability
Winner Overall: IBM Instana Observability
IBM Instana Observability automatically discovers, maps and monitors all services and infrastructure components, providing complete visibility across an application stack. It continuously captures every trace, detects changes in real-time, and provides detailed insights to automate root cause detection and problem resolution. Instana’s approach to observability includes built-in automation, application and infrastructure context, and AI-powered intelligent actions.
IBM Instana Observability scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Dynatrace Unified Observability Platform
Finalist: Datadog Observability Platform
Finalist: Grafana 11
Finalist: New Relic Observability Platform
Finalist: Splunk Observability Cloud

Artificial Intelligence: AI PCs
Winner Overall: Acer TravelMate P4 14
The Acer TravelMate P4 14 laptop for business professionals is an AI-ready business laptop with Microsoft Copilot, TravelMate Sense, and PurifiedVoice with AI Noise Reduction. The product harnesses the power of the Intel Core processor with built-in vPro Essentials hardware security and Intel Unison phone integration capabilities.
The Acer TravelMate P4 14 scored highest overall in this product category and highest for technology and for revenue and profit.
Subcategory Winner – Customer Need: HP EliteBook 1040 G11
Finalist: Apple MacBook Pro
Finalist: Dell Latitude 7455
Finalist: Lenovo ThinkPad 14S Gen 6
Finalist: Samsung Galaxy Book4 Pro

Artificial Intelligence: Infrastructure
Winner Overall: Supermicro AS-4125GS-TNHR2-LCC
The Supermicro AS-4125GS-TNHR2-LCC is designed for large-scale and cloud-scale compute tasks in AI, high performance computing, AI/deep learning, and deep learning/AI/ML development. The rackmount, liquid-cooled server runs on Nvidia H100 GPU processors.
The Supermicro AS-4125GS-TNHR2-LCC scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Dell PowerEdge R760xa
Finalist: Lenovo ThinkSystem SR780a V3

Artificial Intelligence: Productivity Suites
Winner Overall: Gemini For Google Workspace
Google Gemini is an AI-powered assistant that helps with a variety of tasks including writing, coding, research, data analysis and design. Gemini is integrated into Gmail, Docs and Sheets.
Gemini for Google Workspace scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Microsoft Copilot

Big Data
Winner Overall: HPE Ezmeral Data Fabric Software
Hewlett Packard Enterprise’s HPE Ezmeral Data Fabric Software is a platform for data-driven analytics, machine learning and AI workloads. It serves as a secure data store and provides file storage, NoSQL database, object storage and event stream capabilities. The product reduces data silos with a unified data lakehouse, and centrally manages and governs data while accessing it directly where it resides.
HPE Ezmeral Data Fabric Software scored highest overall in this product category, highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Cloudera Open Data Lakehouse
Finalist: Databricks Data Intelligence Platform
Finalist: Microsoft Intelligent Data Platform
Finalist: Snowflake Data Cloud
Finalist: Starburst Galaxy

Business Applications
Winner Overall: SAP S/4HANA
S/4HANA is SAP’s flagship ERP (enterprise resource planning) business application suite that provides finance, accounting, procurement, supply chain, production and employee management capabilities. The software uses AI and machine learning to analyze operational data and automate routine business tasks.
SAP S/4HANA scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Epicor ERP
Finalist: Oracle NetSuite
Finalist: Microsoft Dynamics 365
Finalist: Sage Intacct

Business Intelligence and Data Analytics
Winner Overall: MicroStrategy ONE
MicroStrategy ONE is a cloud-based, AI-powered business intelligence system that turns raw data into actionable insights. The software provides an array of role-based analytical capabilities and offers no-code, low-code and pro-code options for infusing analytics into business operations.
MicroStrategy ONE scored highest overall in this product category and highest for revenue and profit. It tied with the highest scores for customer need.
Subcategory Winner – Technology: Amazon Redshift
Subcategory Winner – Customer Need: Domo Data Experience Platform (tie)
Finalist: Google Cloud BigQuery
Finalist: Qlik Sense
Finalist: Salesforce Tableau

Data Protection, Management and Resiliency
Winner Overall: Veeam Data Platform
Veeam Data Platform data protection and management software is used by businesses to protect, backup, recover and manage their data across on-premises, hybrid and multi-cloud environments. The system protects data across a range of physical servers, cloud instances, applications and virtual machines.
Veeam Data Platform scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Cohesity Data Cloud
Finalist: Commvault Cloud Powered by Metallic AI
Finalist: Dell PowerProtect Data Manager Appliance
Finalist: HYCU R-Cloud
Finalist: Rubrik Security Cloud

Edge Computing/Internet of Things
Winner Overall: Scale Computing Autonomous Infrastructure Management Engine (AIME)
Scale Computing AIME provides the AI orchestration and management functionality within SC//HyperCore and significantly reduces the effort required to deploy, secure, manage and maintain IT infrastructure – including edge computing and IoT systems.
Scale Computing AIME scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Red Hat Device Edge
Finalist: Eaton iCube
Finalist: HPE Edgeline
Finalist: IBM Edge Application Manager
Finalist: Schneider Electric EcoStruxure Micro Data Center R-Series

Hybrid Cloud Infrastructure
Winner Overall: NetApp Hybrid Cloud
NetApp Hybrid Cloud combines public and private clouds, on-premises data centers and edge locations to run distributed workloads including web and content hosting, application development, data analytics and containerized applications.
NetApp Hybrid Cloud scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: IBM Hybrid Cloud
Finalist: Dell Technologies Apex Hybrid Cloud
Finalist: HPE GreenLake
Finalist: Nutanix Hybrid Multicloud
Finalist: VMware Cloud Foundation

MSP Platforms
Winner Overall: Kaseya 365
Kaseya 365 is a subscription-based service for MSPs that provides the core functions needed to manage, secure, backup and automate endpoint devices. The company recently introduced Kaseya 365 User to protect user data and identities in Microsoft 365 and Google Workspace environments.
Kaseya 365 scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: N-able Cloud Commander
Finalist: Atera
Finalist: ConnectWise Asio Platform
Finalist: HaloPSA
Finalist: Syncro Platform

Networking – Enterprise
Winner Overall: Cisco Networking Cloud
Cisco Networking Cloud is Cisco’s AI-native platform built for the global area network that provides unified management, automation, and operational simplicity and security by converging and connecting fragmented on-premises and cloud networks.
Cisco Networking Cloud scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: HPE Aruba Networking Enterprise Private 5G
Finalist: Juniper AI-Native Networking Platform
Finalist: Nile NaaS
Finalist: Prosimo AI Suite for Multi-Cloud Networking

Networking – Wireless
Winner Overall: HPE Aruba Networking Wi-Fi 7 Access Point
HPE Aruba Networking Wi-Fi 7 access point networking devices provide AI-ready, high-performance and secure connectivity for enterprise applications, edge IT and IoT devices. HPE Aruba says the Wi-Fi 7 access points provide up to 30 percent more capacity for wireless traffic than competing products.
HPE Aruba Networking Wi-Fi 7 access point scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Extreme Networks AP5020 universal Wi-Fi 7 access point
Finalist: Fortinet FortiAP 441K Wi-Fi 7 access point
Finalist: Zyxel Wi-Fi 7 access point

Power Protection and Management
Winner Overall: Eaton 9PX 6kVA Lithium-Ion UPS
Eaton touts the 9PX 6kVA Lithium-Ion UPS as ideal for enterprise IT, edge deployment and light industrial applications. It features remote firmware upgrades and integration with leading hyperconverged infrastructure and virtualization platforms.
The Eaton 9PX 6kVA Lithium-Ion UPS scored highest overall in this product category and highest for revenue and profit and for customer need.
Subcategory Winner – Technology: Vertiv Liebert GXT5 Lithium-Ion UPS
Finalist: CyberPower PFC Sinewave 1U UPS
Finalist: Schneider Electric Easy UPS 3-Phase 3M Advanced

Processors – CPUs
Winner Overall: Intel Core Ultra Series
Intel describes its Core Ultra processors as its premier processor line for desktop systems and mobile devices for enabling AI experiences such as copilots, productivity assistants, text and image creation, and collaboration.
Intel Core Ultra Series scored highest overall in this product category and highest for customer need.
Subcategory Winner – Technology: Apple M3
Subcategory Winner – Revenue and Profit: AMD Ryzen Pro 8040 Series
Finalist: AmpereOne
Finalist: Qualcomm Snapdragon X Elite

Processors – GPUs
Winner Overall: Nvidia H200
With its higher performance and expanded memory bandwidth and capacity, Nvidia’s H200 Tensor Core GPU is a popular processor for GenAI and high-performance computing workloads. Nvidia says the H200 also offers improved energy efficiency and lower TCO than the earlier H100.
The Nvidia H200 scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: AMD Instinct MI300X
Finalist: Intel ARC A570M

Public Cloud Platforms
Winner Overall: Microsoft Azure
Microsoft Azure is one of the industry’s leading cloud platforms, providing services for building, running and managing cloud applications. The platform’s wide range of services includes compute, storage, analytics and networking capabilities.
Microsoft Azure scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Oracle Cloud Infrastructure (OCI)
Finalist: Amazon Web Services
Finalist: CoreWeave Cloud
Finalist: Google Cloud Platform
Finalist: Snowflake Data Cloud

SD-WAN
Winner Overall: HPE Aruba EdgeConnect SD-WAN
HPE Aruba EdgeConnect SD-WAN is a software-as-a-service wide area network that provides secure connectivity and data access for hybrid work environments. Features include secure access service (SASE), a single management interface for observing and controlling the WAN and SASE, real-time monitoring and virtual WAN capabilities.
HPE Aruba EdgeConnect SD-WAN scored highest overall in this product category and highest for revenue and profit.
Subcategory Winner – Technology, Customer Need: Extreme Networks Extreme Cloud SD-WAN
Finalist: Cisco Catalyst SD-WAN
Finalist: Fortinet Secure SD-WAN
Finalist: Palo Alto Networks Prisma SD-WAN
Finalist: Zscaler Zero Trust SD-WAN

Security – Cloud and Application Security
Winner Overall: SentinelOne Singularity Cloud Security
SentinelOne Singularity Cloud Security provides cloud security, threat detection and response for servers, virtual machines and containers. Part of SentinelOne’s Singularity platform, Singularity Cloud Security works across private and public clouds, including AWS, Azure and Google Cloud Platform.
SentinelOne Singularity Cloud Security scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Palo Alto Networks Prisma Cloud
Finalist: CrowdStrike Falcon Cloud Security
Finalist: F5 Distributed Cloud Services Web Application Scanning
Finalist: Orca Cloud Security Platform
Finalist: Tenable Cloud Security
Finalist: Wiz Cloud Security Platform

Security – Data
Winner Overall: IBM Guardium Data Protection
IBM Guardium Data Protection protects sensitive data in the cloud and in on-premises systems. GDP automatically discovers and classifies sensitive data across an enterprise and provides data activity monitoring and analytics, near real-time threat response workflows, and automated compliance auditing and reporting.
IBM Guardium Data Protection scored highest overall in this product category and highest for revenue and profit.
Subcategory Winner – Technology, Customer Need: Zscaler Data Protection
Finalist: ForcePoint ONE Data Security
Finalist: Proofpoint Information Protection
Finalist: Rubrik Security Cloud
Finalist: Wiz DSPM

Security – Email and Web Security
Winner Overall: Mimecast Advanced Email Security
Mimecast Advanced Email Security is a comprehensive, cloud-based email security system that guards email from a range of cyberattacks including spam, viruses and malware. Capabilities include threat intelligence and protection, data leak prevention and secure messaging.
Mimecast Advanced Email Security scored highest overall in this product category and highest for revenue and profit.
Subcategory Winner – Technology, Customer Need: Cloudflare Application Security
Finalist: Abnormal Security Platform
Finalist: Akamai API Security
Finalist: Barracuda Email Protection
Finalist: Proofpoint Threat Protection

Security – Endpoint Protection
Winner Overall: Sophos Intercept X
Sophos Intercept X takes a comprehensive, prevention-first approach to security that Sophos says blocks threats without relying on any single technique. Intercept X provides endpoint detection and response cybersecurity using a wide range of tactics to stop ransomware, breaches, data loss and other advanced threats from impacting end users.
Sophos Intercept X scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: CrowdStrike Falcon Insight XDR
Finalist: Huntress Managed EDR
Finalist: SentinelOne Singularity XDR
Finalist: ThreatLocker Protect
Finalist: Trend Micro Trend Vision One

Security – Identity and Access Management
Winner Overall: CyberArk Workforce Identity
CyberArk Workforce Identity is a SaaS-delivered system that simplifies identity and access management across enterprise systems. The product provides unified workforce and B2B access and identity management within a single offering.
CyberArk Workforce Identity scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: Ping Identity PingOne for Workforce
Finalist: Okta Workforce Identity Cloud
Finalist: Microsoft Entra ID
Finalist: OpenText NetIQ Identity Manager
Finalist: SailPoint Identity Security Cloud

Security – Managed Detection and Response
Winner Overall: Huntress Managed Identity Threat Detection and Response
With Huntress Managed Identity Threat Detection and Response (ITDR), formerly MDR for Microsoft 365, Huntress threat experts monitor and respond in real time to critical security threats such as suspicious login activity, privilege escalation attempts, and email tampering and forwarding.
Huntress Managed Identity Threat Detection and Response scored highest overall in this product category.
Subcategory Winner – Technology: SentinelOne Singularity MDR
Subcategory Winner – Revenue and Profit: Sophos MDR
Subcategory Winner – Customer Need: Arctic Wolf MDR
Finalist: CrowdStrike Falcon Complete Next-Gen MDR
Finalist: ThreatLocker Cyber Hero MDR

Security – Network
Winner Overall: Cisco Hypershield
Cisco Hypershield is a distributed security architecture that protects networks, applications and workloads in data centers and cloud environments. Hypershield features an AI-native rules engine, autonomous segmentation and distributed exploit protection.
Cisco Hypershield scored highest overall in this product category and highest for technology.
Subcategory Winner – Revenue and Profit: Fortinet FortiGate (tie)
Subcategory Winner – Revenue and Profit: SonicWall Cloud Secure Edge (tie)
Subcategory Winner – Customer Need: Fortinet FortiGate
Finalist: Sophos XGS Firewall
Finalist: ThreatLocker CyberHero MDR
Finalist: WatchGuard ThreatSync+ NDR

Security – Security Operations Platform
Winner Overall: Arctic Wolf Security Operations
Arctic Wolf Security Operations, renamed Arctic Wolf Aurora Platform in November 2024, is built on an open XDR architecture and is a cloud-based platform that offers a range of services to protect against cyberthreats including managed detection and response, incident response, threat intelligence, managed risk, managed security awareness and a security operations warranty.
It scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: CrowdStrike Falcon Next-Gen SIEM
Finalist: Google Security Operations
Finalist: Microsoft Sentinel
Finalist: Palo Alto Networks Cortex XSIAM 2.0
Finalist: Splunk Enterprise Security

Security – Security Access Service Edge
Winner Overall: Palo Alto Networks Prisma SASE
Palo Alto Networks describes Prisma SASE as a complete AI-powered SASE solution that combines network security, SD-WAN and autonomous digital experience management in a single service. It incorporates the Zero Trust Network Access 2.0 architecture.
Palo Alto Networks Prisma SASE scored highest overall in this product category and highest for technology.
Subcategory Winner – Revenue and Profit: Zscaler Zero Trust SASE
Subcategory Winner – Customer Need: Fortinet FortiSASE
Finalist: Cato SASE Cloud Platform
Finalist: Cisco Secure Access
Finalist: Netskope One SASE

Storage – Enterprise
Winner Overall: NetApp AFF C-Series
The NetApp AFF C-Series storage array platform offers all-flash storage for data centers. The product is designed to provide economical, high-density storage for tier 1 and tier 2 data center workloads and to unify storage environments.
NetApp AFF C-Series scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Pure Storage FlashArray//E
Finalist: Dell PowerStore
Finalist: HPE Alletra Storage MP
Finalist: Infinidat SSA Express
Finalist: Quantum ActiveScale Z200 Object Storage

Storage – Software-Defined
Winner Overall: Pure Storage Purity
Pure Storage Purity software unifies, manages and protects data in data centers, the cloud or at the edge. Capabilities include data management (including AI-driven array operations, monitoring, analysis and optimization), data replication, data mobility, data reduction, data encryption, disaster recovery and high availability.
Pure Storage Purity scored highest overall in this product category and highest for technology, revenue and profit, and customer need.
Finalist: DDN Infinia
Finalist: Dell PowerFlex
Finalist: HPE GreenLake for Block Storage
Finalist: IBM Software-Defined Storage

Unified Communications and Collaboration
Winner Overall: RingCentral RingCX
RingCentral RingCX is an AI-powered, omnichannel contact center system that combines voice, video and more than 20 digital channels in a single platform. Capabilities include intelligent virtual agent integration, analytics and reports, outbound dialing and agent scripting.
RingCentral RingCX scored highest overall in this product category and highest for technology and customer need.
Subcategory Winner – Revenue and Profit: Intermedia Unite
Finalist: Cisco Webex
Finalist: Microsoft Teams
Finalist: Nextiva Unified Customer Experience Platform

Five companies that won this week

For the week ending November 22, CRN takes a look at companies that brought their ‘A’ game to the channel, including Wiz, Google Cloud, Descope, Nvidia, DXC and ServiceNow.

Week ending November 22
Topping this week’s Came To Win list is fast-growing cloud security provider Wiz, thanks to a strategic acquisition of cloud remediation technology.
This week’s list also includes Google Cloud for creating a new AI agent partner program and security startup Descope for launching its first channel program.
Nvidia is here to unveil its new four-GPU “superchip” for advanced AI computing. And IT services provider DXC and workflow automation company ServiceNow have teamed up to create a new center of excellence to accelerate the adoption of ServiceNow’s GenAI offerings.

Wiz to acquire cloud remediation startup Dazz for $450 million
Cloud and AI security provider Wiz this week announced a deal to acquire channel-focused startup Dazz to expand the vendor’s cloud and AI security platform with cloud remediation capabilities.
Calcalist reported the price tag of the planned Dazz acquisition at $450 million. CRN has reached out to Wiz and Dazz for comment. A source familiar with the deal confirmed the figure to CRN.
Dazz offers a cloud security platform that focuses on remediation, including capabilities such as application security posture management and continuous threat and exposure management.
In a post Thursday, Wiz co-founder and CEO Assaf Rappaport said Dazz brings an “industry-leading remediation engine” that will enable Wiz to help security teams correlate data from multiple sources and manage application risk in a unified platform.
In July, Dazz announced it had raised $50 million in funding, bringing the startup’s total funding since its launch in 2021 to $110 million.
This is the second acquisition of 2024 for fast-growing Wiz, following its deal for cloud detection and response provider Gem Security in April, and its third acquisition overall.

Google Cloud launches AI Agent Partner Program to boost GenAI sales, customer growth
Google Cloud makes this week’s list of five winning companies with its plans to take AI agent sales and customer adoption to new heights, launching the Google Cloud AI Agent Ecosystem Program to help partners build and co-innovate AI agents through new technical and go-to-market resources.
“Through this program, we will provide incentives, product support and co-selling opportunities to help our service and ISV partners bring these solutions to market faster, reach more customers and grow their AI agent businesses,” said Kevin Ichhpurani, president of Google Cloud’s global partner organization, in a blog post.
Additionally, the cloud giant launched a new AI Agent Space on Google Cloud Marketplace with the goal of enabling customers to more easily find and deploy partner-built AI agents.
Google Cloud plans for its new AI Agent Partner Program to accelerate the development and adoption of AI agents by supporting partners in three key areas: accelerated agent development, market success, and increased customer visibility.

Descope Unveils First Channel Program to Improve Identity and Access Management With Partners
Staying on the topic of partner program launches, Descope is launching its first formal channel program, co-founder Rishi Bhargava told CRN, enlisting solution and service providers to help accelerate growth of its simplified platform for customer identity and access management (CIAM).
Bhargava said the startup is taking a “partner-first” approach for its next phase of expansion and is hoping to surpass its current rate of 30 percent of revenue generated through partners.
“We’re starting to see signs where channel partners are already bringing us into bigger and bigger opportunities,” he said. “We are confident that this will lead to huge growth.”
Bhargava said Descope is looking to expand by recruiting more solution and service provider partners, who will gain a formal process for engaging with the company as part of the new channel program. Key benefits of the program include deal registration, incentives, pre-sales support and joint marketing programs.

Nvidia unveils 4-GPU GB200 NVL4 superchip
Nvidia debuted its biggest AI “chip” to date on this week’s list, the four-GPU Grace Blackwell GB200 NVL4 Superchip, another sign of how the company is stretching the traditional definition of a semiconductor chip as it pursues its AI computing ambitions.
Announced at the Supercomputing 2024 event on Monday, the new product is a step up from Nvidia’s recently launched Grace Blackwell GB200 Superchip, which was revealed in March.
The GB200 NVL4 superchip is designed for a “single server Blackwell solution” running a mix of high-performance computing and AI workloads, said Dion Harris, director of accelerated computing at Nvidia, in a briefing with reporters.

DXC, ServiceNow partner on new center of excellence to help drive GenAI adoption
Global technology services provider DXC this week unveiled a new Center of Excellence built with the help of ServiceNow to help drive AI adoption using ServiceNow’s GenAI technologies.
The new center of excellence aims to demonstrate how the company’s customers can use ServiceNow’s AI capabilities in a practical way, said Howard Boville, DXC’s executive vice president of consulting and engineering services. This includes working with Now Assist, a generative AI-powered service within the Now platform.
With the center of excellence, DXC can leverage all the knowledge it has built with customers around the world by bringing it into one place, Boville said.
Another benefit of the Center of Excellence is training opportunities for DXC personnel. ServiceNow has invested its own resources in the DXC center, including providing its own experts to help train DXC staff.
Boville said the center of excellence builds on DXC’s more than 15-year partnership with ServiceNow. Erica Volini, ServiceNow’s senior vice president of global partnerships and channels, told CRN that when it comes to GenAI, DXC is one of the company’s most aggressive partners.

10 Hottest Semiconductor Startups of 2024

From Celestial AI to Untether AI, these startups are trying to challenge Nvidia’s AI computing dominance or deliver complementary chip technologies that could shake up the tech industry.

While Nvidia rakes in billions of dollars from AI chip spending every quarter, many companies and investors believe there is room for other winners in the AI infrastructure market, whether for chips at the edge or in data centers.
Nvidia may face its biggest threats from AMD and, to some extent, Intel, as well as from its largest cloud partners, Amazon Web Services, Microsoft Azure and Google Cloud, which are building their own AI chips among other homegrown silicon efforts.
[Related: Nvidia Reveals 4-GPU GB200 NVL4 Superchip, Releases H200 NVL Module]
But the AI computing giant is also up against a small army of entrepreneurs and investors who are introducing new chip solutions with new design techniques and big claims about how they can enable faster and more efficient AI calculations.
Semiconductor startups behind these efforts include those focused on the edge market, such as Hailo, SiMa.ai and Untether AI, as well as startups making chips for AI data centers, such as d-Matrix, Groq and Tenstorrent.
However, it’s not all about competition. Some semiconductor startups, such as Enfabrica and Lightmatter, are working on complementary chip technologies that could, for example, speed up the transmission of data between AI servers.
Here are CRN’s 10 hottest semiconductor startups of 2024, which, in addition to the startups above, also include Celestial AI and Etched.

Celestial AI
Top Executive: David Lazovsky, Founder and CEO
Celestial AI says it is paving the way for advancements in AI computing by overcoming latency and bandwidth barriers with its photonic fabric optical interconnect technology.
The Santa Clara, Calif.-based startup announced in March that it had raised a “highly oversubscribed” $175 million Series C funding round, which was led by the U.S. Innovative Technology Fund and backed by multiple other investors, including the venture arms of AMD and Samsung as well as Porsche SE, Volkswagen Group’s holding company.
The same month, the startup said that hyperscaler companies – the world’s largest consumers of data center infrastructure – and semiconductor companies “are now designing photonic fabric into optical chiplets as an early step in technology adoption.”
Then in October, Celestial AI announced it had acquired intellectual property from silicon photonics pioneer Rockley Photonics. According to Celestial AI, the deal included issued and pending patents related to silicon photonics, giving the startup one of the strongest IP portfolios in the field of silicon photonics for optical compute interconnects.

d-Matrix
Top Executive: Sid Sheth, Co-Founder and CEO
d-Matrix says it is breaking the memory bandwidth barrier for generative AI inference workloads with its new digital in-memory compute (DIMC) architecture.
The Santa Clara, Calif.-based startup, which is backed by the venture arms of Microsoft, Samsung Electronics and Ericsson, in November announced the launch of its first product, the Corsair PCIe card, which uses the DIMC architecture to “accelerate AI inference workloads with industry-leading real-time performance, energy efficiency and cost savings compared to GPUs and other alternatives.”
The PCIe card is now sampling with customers and will be broadly available in the second quarter of next year with support from OEMs and system integrators. Vendors supporting Corsair include Supermicro, GigaIO and Liqid.

Enfabrica
Top Executive: Rochan Sankar, CEO
Enfabrica says it is delivering the fastest network interface controller chip for GPU servers in the industry with silicon that was built from the ground up for the needs of AI data centers.
The Mountain View, Calif.-based startup announced in November that it had raised a $115 million Series C funding round to help commercialize its Accelerated Compute Fabric SuperNIC chip, which it said will enable “groundbreaking” bandwidth of up to 3.2 TB/s. Pilot systems will be available in the first quarter of next year.
The ACF SuperNIC supports a high radix of 800-, 400- and 100-Gigabit Ethernet interfaces as well as 32 network ports and 160 PCIe lanes on a single chip, enabling clusters of more than 500,000 GPUs using a two-tier network design “for the highest scale-out throughput and lowest end-to-end latency across all GPUs in the cluster,” the company said.

Etched
Top Executive: Gavin Uberti, Co-Founder and CEO
Etched says it is betting its entire business model on what it is calling the world’s first specialized chip for Transformer-based AI models.
The San Francisco-based startup announced in June that it had raised $120 million in funding from a wide range of investors, including PayPal co-founder Peter Thiel, Fontinalis and Skybox Data Centers.
In a blog post, the company claimed that its Sohu chip will be “much faster and cheaper than even Nvidia’s next-generation Blackwell B200 GPUs” when it comes to large language models and other Transformer-based models, but that it won’t be able to run any model that is not based on the Transformer architecture.

Groq
Top Executive: Jonathan Ross, Founder and CEO
Groq says its language processing unit enables faster AI inference performance through its cloud service and on-premises compute clusters.
The Mountain View, California-based startup announced in August that it had raised a $640 million Series D funding round at a valuation of $2.8 billion, led by funds and accounts managed by BlackRock Private Equity Partners. Other investors included the venture arms of Cisco Systems and Samsung Electronics.
Back in May, Groq announced that it had inked a distribution deal with U.S. solutions provider powerhouse Carahsoft to target public sector customers. Then in September, the company announced it had signed a memorandum of understanding with the digital and technology subsidiary of Middle Eastern oil giant Aramco to “establish the world’s largest inferencing data center” in the Kingdom of Saudi Arabia.

Hailo
Top Executive: Orr Danon, Co-Founder and CEO
Hailo is taking on Nvidia by accelerating generative AI workloads with chips that it says lead the pack when it comes to optimizing performance for cost and power.
The Tel Aviv, Israel-based startup announced in April that it had raised $120 million from investors as an extension of its Series C funding round, in addition to launching its new Hailo-10 accelerator chip, which it says enables “maximum GenAI performance” for devices such as PCs and automotive infotainment systems.
The company has also announced several partnerships, including a deal with Raspberry Pi to provide chips for the Raspberry Pi AI Kit, a deal with Adlink Technology to incorporate its Hailo-8 chip into the latter’s edge computing platform, and a deal with SolidRun to incorporate the Hailo-15H into a system-on-module solution for AI vision applications.

Lightmatter
Top Executive: Nick Harris, Co-Founder and CEO
Lightmatter says it is reinventing AI infrastructure with 3-D stacked photonics chips that can dramatically increase AI cluster bandwidth and performance while reducing energy use.
The Mountain View, California-based startup announced in October that it had raised a $400 million Series D funding round at a valuation of $4.4 billion. The round was led by new investors advised by T. Rowe Price Associates, with participation from Google and the venture arm of Fidelity Management & Research Company, among other investors.
The company said it plans to use the funding to prepare the Passage Photonics chip for “large-scale deployment in partner data centers.” It says the chip is “the first photonic engine to deliver I/O in 3-D,” which is expected to “free up the shoreline” for GPUs and other types of processors to “support more memory.”

SiMa.ai
Top Executive: Krishna Rangasayee, Founder and CEO
SiMa.ai is hoping to displace Nvidia for generative AI with powerful and efficient chips, which it says can handle a wide variety of modalities in one “software-centric” platform.
The San Jose, Calif.-based startup announced in September that it expects to begin sampling its new MLSoC Modalix family of chips with customers by the end of the year. The chips support convolutional neural networks, Transformers, large language models, large multimodal models and other kinds of generative AI models at the edge. The company says the chips deliver up to 10 times greater performance per watt than alternatives.
SiMa.ai announced the new edge AI chip family after saying in April that it had raised $70 million in funding from investors, including the venture arm of Dell Technologies and Cadence Design Systems executive chairman Lip-Bu Tan.
The company has announced several partnerships this year to commercialize its chips, including a deal with Lanner to integrate its chips into edge AI devices; a distribution deal with Arrow Electronics covering Europe, the Middle East and Africa; and an agreement with CVEDIA and Supermicro to deliver state-of-the-art equipment with AI video analytics capabilities.

Tenstorrent
Top Executive: Jim Keller, CEO
Tenstorrent is blazing a new trail in chip design for AI computing with a business model that combines selling specialized processors, licensing chip technologies for others to use, and working with other firms to develop computing solutions.
The Toronto, Ontario-based startup announced in November that it had entered into a strategic partnership with South Korean AI software company Moreh to challenge Nvidia in the AI data center market. The two companies are working on a solution that combines Tenstorrent’s neural processing units with Moreh’s software to support a wide range of AI applications, including large language model inference and training.
In February, the company announced a “multi-level partnership deal” with Japan’s Leading-edge Semiconductor Technology Center, which plans to leverage Tenstorrent’s RISC-V and chiplet technology for the 2-nanometer edge AI accelerator it is developing. The startup will also serve as a co-design partner for the chip.
Tenstorrent has also this year announced a partnership with SingularityNET to develop optimized hardware and software architectures for artificial general intelligence, and it has launched developer kits and workstations based on its Wormhole processors.

Untether AI
Top Executive: Chris Walker, CEO
Untether AI says its new chip delivers “the world’s fastest, most energy-efficient AI inference” performance for applications running at the edge and in data centers.
The Toronto, Canada-based startup, led by former Intel executive Chris Walker, in October announced the launch of its speedAI240 Slim AI inference accelerator card, which it said was recently recognized as achieving “the world’s lowest latency and highest throughput” in [peer-reviewed] MLPerf inference benchmarks.
In August, the company announced a “multi-faceted partnership” with Indian AI cloud, model and services provider Ola Krutrim, which will include co-development of next-generation data center solutions with Untether AI that will be used for deploying, fine-tuning and running inference on Krutrim’s generative AI models.

Nvidia made a lot more money than Intel, AMD last quarter

Nvidia could end its current fiscal year with revenue that is not only more than double that of last year, when it overtook Intel in annual sales for the first time, but also 64 percent more than the combined full-year revenue projected for Intel and AMD.

Nvidia generated nearly 75 percent more revenue than Intel and AMD combined in their most recently completed quarters, another sign of the dominance Nvidia has achieved over its nearest chip rivals in the fast-growing AI computing market.
As disclosed on Wednesday, Nvidia reported third-quarter revenue of $35.1 billion, an increase of 94 percent year over year. With the majority of the company’s sales occurring through the channel, a large share of partners likely benefited, including the system integrators and distributors who helped generate an estimated 10 percent or more of Nvidia’s revenue with a single, unnamed customer, according to Nvidia’s latest 10-Q filing with the U.S. Securities and Exchange Commission.
[Related: Nvidia: ‘We Are Racing To Scale Supply To Meet Incredible’ Blackwell Demand]
Intel, on the other hand, recorded revenue 2.6 times smaller than Nvidia’s during roughly the same period: $13.3 billion, a 6 percent decline year over year. AMD, meanwhile, earned about half as much as Intel: $6.8 billion, a record for the company and an increase of 18 percent year over year.
Compare that to five years ago when Intel was leading the pack with $19.2 billion in third-quarter revenue, followed by Nvidia with $3 billion and AMD with $1.8 billion. By the end of their respective fiscal years (with Nvidia’s ending a month later in January), Intel finished with $72 billion, followed by Nvidia with $10.9 billion and AMD with $6.7 billion.
Based on recently released fourth-quarter revenue forecasts for each company, the picture is expected to look very different by the end of their current fiscal years: Nvidia is projected to end its 2025 fiscal year this January with $128.6 billion, while Intel and AMD are expected to end 2024 with $52.6 billion and $25.6 billion, respectively. (CRN’s 2024 full-year estimate for Intel is based on the midpoint of its fourth-quarter guidance.)
This means that Nvidia could end its current fiscal year with revenue more than double that of last year, when the company overtook Intel in annual sales for the first time. That would be about 63 percent more than Intel’s record $79 billion in revenue for 2021 and 64 percent more than the combined full-year revenue projected for Intel and AMD.
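Those last two percentages follow directly from the projections cited in this article; a quick arithmetic sketch (all figures in billions of U.S. dollars, taken from the paragraphs above):

```python
# Sanity check of the fiscal-year revenue comparisons cited above.
nvidia_fy2025 = 128.6        # Nvidia's projected fiscal 2025 revenue
intel_2024 = 52.6            # Intel's projected 2024 revenue (midpoint-based)
amd_2024 = 25.6              # AMD's projected 2024 revenue
intel_record_2021 = 79.0     # Intel's record 2021 revenue

combined = intel_2024 + amd_2024
pct_over_combined = (nvidia_fy2025 / combined - 1) * 100
pct_over_intel_record = (nvidia_fy2025 / intel_record_2021 - 1) * 100

print(f"Intel + AMD combined: ${combined:.1f}B")                             # $78.2B
print(f"Nvidia vs. the combined total: {pct_over_combined:.0f}% more")       # 64% more
print(f"Nvidia vs. Intel's 2021 record: {pct_over_intel_record:.0f}% more")  # 63% more
```

Rounded to whole percentages, the result matches the article’s figures of 64 percent and 63 percent.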
Nvidia’s rapid revenue growth over the past few years is no accident. The company made an early bet on the promise of accelerated computing, with investments and acquisitions that allowed it to build a comprehensive, integrated stack of chips, systems, software and services in time for the generative AI revolution.
But continued high demand for Nvidia’s chips and systems (not to mention its software-as-a-service and support offerings, which now command an annual run rate of $1.5 billion, according to the company’s Wednesday earnings call) shows just how large the appetite is for its wares, and it indicates that Intel and AMD will need to overcome major hurdles to claim a fair share of the AI computing market.

‘We are racing to scale supply to meet incredible’ Blackwell demand

“Both Hopper and Blackwell systems have certain supply constraints, and demand for Blackwell is expected to exceed supply for several quarters in fiscal 2026,” Nvidia finance chief Colette Kress said in her third-quarter remarks, before CEO Jensen Huang addressed reported issues with Blackwell.

Nvidia CFO Colette Kress said the company is racing to scale supply to meet the “incredible” demand for its recently launched Blackwell GPUs and related systems, which are designed to deliver a big leap in the performance and efficiency of generative AI.
Kress made the comments during the AI computing giant’s Wednesday earnings call, where the company revealed that its third-quarter revenue nearly doubled to $35.1 billion, mainly due to continued high demand for its data center chips and systems.
[Related: Nvidia Reveals 4-GPU GB200 NVL4 Superchip, Releases H200 NVL Module]
In prepared remarks for the latest earnings, Kress said that Nvidia is about to start shipping Blackwell products, which are now in full production, and that it plans to ramp those shipments into its 2026 fiscal year, which begins in late January.
“Both the Hopper and Blackwell systems have some supply constraints, and demand for Blackwell is expected to exceed supply for several quarters into fiscal 2026,” she said.
On the call, Kress said Nvidia is “on track to exceed” its previous Blackwell revenue estimate of several billion dollars for the fourth quarter, which ends in late January, as its “visibility into supply continues to increase.”
“Blackwell is a customizable AI infrastructure with seven different types of Nvidia-made chips, multiple networking options, and air- and liquid-cooled data centers,” she said. “Our current focus is on meeting strong demand, increasing system availability and providing our customers with the optimal mix of configurations.”

Jensen Huang addresses Blackwell issues, road map execution

During the question-and-answer portion of the call, a financial analyst asked Nvidia CEO Jensen Huang to address a Sunday report by industry publication The Information that detailed some customer concerns about overheating of Blackwell GPUs in the most powerful configuration of its Grace Blackwell GB200 NVL72 rack-scale server platform.
The GB200 NVL72 is expected to serve as the basis for upcoming AI offerings from major Nvidia OEM and cloud computing partners, including Dell Technologies, Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud.
In response to the question about The Information’s report on overheating of Blackwell GPUs in the GB200 NVL72 system, Huang said that while Blackwell production is “all in” and the company is exceeding previous revenue projections, the engineering Nvidia does with OEMs and cloud computing partners is “rather complicated.”
“The reason is that although we build the full stack and the full infrastructure, we disaggregate all of the AI supercomputers, and we integrate them into all of the custom data centers and architectures around the world,” he said on the earnings call.
“That integration process is something we have done for generations. We’re pretty good at it, but there’s still a lot of engineering to be done at this point,” Huang said.
Huang noted that Dell Technologies announced Sunday that it has begun shipping its GB200 NVL72 server racks to customers, including emerging GPU cloud service provider CoreWeave. He also mentioned the Blackwell system that is being built by Oracle Cloud Infrastructure, Microsoft Azure and Google Cloud.
“But as you can see from all the systems that are being put in place, Blackwell is in very good shape,” he said. “And as we mentioned earlier, the supply and turnaround we plan to make this quarter exceeds our previous estimates.”
Addressing a question about Nvidia’s ability to execute on its data center road map, which moved to an annual release cadence for GPUs and other chips last year, Huang remained steadfast in his commitment to the accelerated production plan.
“We are on an annual road map, and we are looking forward to continuing to execute on that annual road map. By doing so, we increase the performance of our platform. But it’s also really important to understand that when we’re able to increase performance at X factors at a time, we’re reducing the cost of training, we’re reducing the cost of inference, and we’re reducing the cost of AI so that it can be much more accessible,” he said.

Partners’ take on Nvidia’s growth, investors’ reaction

After Nvidia released its third-quarter earnings, investors reacted, sending the company’s share price down more than 1 percent in after-hours trading.
While the company beat Wall Street expectations on revenue by nearly $2 billion and beat the average analyst estimate for earnings per share by 6 cents, its fourth-quarter revenue forecast came in only slightly higher than Wall Street expected.
Andy Lin, CTO of Mark III Systems, a top Nvidia partner based in Houston, told CRN that while demand for Nvidia’s data center GPUs and related systems is “obviously incredibly strong,” the company has set “such a high bar” for itself with triple-digit growth across multiple quarters.
“These numbers are still surprising, especially on a year-over-year comparison. But this is clearly a smaller increase than before,” he said.
However, Lin said, some customers are holding off on making any purchases right now because Nvidia is switching from Hopper-based systems to Blackwell-based systems.
“There are certainly a number of organizations that we look at that certainly haven’t spent [money on new infrastructure and are waiting] on Blackwell. So the question is, how many of them, at what scale, and what will that look like? And I think that might be something that the market is probably underestimating a little bit,” he said.

Nvidia Unveils 4-GPU GB200 NVL4 Superchip, Releases H200 NVL Module

At Supercomputing 2024, the AI computing giant shows off its biggest AI ‘chip’ to date, the four-GPU Grace Blackwell GB200 NVL4 Superchip, while it also announces the general availability of its H200 NVL PCIe module for enterprise servers running AI workloads.

Nvidia is unveiling its biggest AI “chip” to date, the four-GPU Grace Blackwell GB200 NVL4 Superchip, another sign of how the company is stretching the traditional definition of a semiconductor chip as it pursues its AI computing ambitions.
Announced at the Supercomputing 2024 event on Monday, the product is a step up from Nvidia’s recently launched Grace Blackwell GB200 Superchip, which was introduced in March as the company’s new flagship AI computing product. The AI computing giant also announced the general availability of its H200 NVL PCIe modules, which will make the H200 GPUs launched earlier this year more accessible for standard server platforms.
[Related: 8 Recent Nvidia Updates: Nutanix AI Deal, AI Data Center Guidelines And More]
The GB200 NVL4 superchip is designed for “single server Blackwell solutions” running a mix of high-performance computing and AI workloads, Dion Harris, director of accelerated computing at Nvidia, said in a briefing with reporters last week.
These server solutions include Hewlett Packard Enterprise’s Cray Supercomputing EX154n accelerator blade, which was announced last week and packs up to 224 B200 GPUs per cabinet, according to HPE. The Cray blade server is expected to be available by the end of 2025.
While the GB200 Superchip resembles a sleek black motherboard that combines an Arm-based Grace CPU with two B200 GPUs based on Nvidia’s new Blackwell architecture, the NVL4 product roughly doubles the surface area of the board with two Grace CPUs and four B200 GPUs, according to an image shared by Nvidia.
Like the standard GB200 Superchip, the GB200 NVL4 uses the fifth generation of Nvidia’s NVLink chip-to-chip interconnect to enable high-speed communications between the CPU and GPU. The company has previously said that this generation of NVLink enables bidirectional throughput per GPU to reach 1.8 TB/s.
Nvidia said the GB200 NVL4 superchip has 1.3TB of coherent memory that is shared across all four B200 GPUs using NVLink.
To demonstrate the computing horsepower of the GB200 NVL4, the company compared it to the previously released GH200 NVL4 superchip, which was originally introduced a year ago as the Quad GH200 and consists of four Grace Hopper GH200 superchips. The GH200 superchip consists of a Grace CPU and a Hopper H200 GPU.
Compared to the GH200 NVL4, the GB200 NVL4 is 2.2 times faster for simulation workloads using MILC code, 80 percent faster for training the 37-million-parameter GraphCast weather forecasting AI model, and 80 percent faster for inference on the 7-billion-parameter Llama 2 model using 16-bit floating-point precision.
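Note that these claims mix two ways of expressing speedups: a minimal sketch (the helper name is ours, not Nvidia’s) converting the “percent faster” figures to the same “X times” form as the simulation claim:

```python
def pct_faster_to_multiplier(pct_faster):
    """Convert an 'N percent faster' claim into an 'X times' multiplier."""
    return 1 + pct_faster / 100

# Nvidia's stated GB200 NVL4 gains over the GH200 NVL4:
training = pct_faster_to_multiplier(80)    # 80 percent faster, i.e. 1.8x
inference = pct_faster_to_multiplier(80)   # same conversion for the Llama 2 claim
simulation = 2.2                           # stated directly as a multiplier

print(training, inference, simulation)
```

In other words, the training and inference gains are 1.8x, versus 2.2x for the MILC simulation workload.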
The company did not provide any additional specifications or performance claims.
At a briefing with reporters, Harris said Nvidia’s partners are expected to introduce new Blackwell-based solutions at Supercomputing 2024 this week.
“Blackwell’s rollout is progressing smoothly thanks to the reference architecture, allowing partners to bring products to market faster while adding their own customizations,” he said.

Nvidia releases H200 NVL PCIe module

In addition to revealing the GB200 NVL4 superchip, Nvidia announced that its previously announced H200 NVL PCIe card will become available in partner systems next month.
The H200 NVL module includes Nvidia’s H200 GPU, which launched earlier this year in the SXM form factor for Nvidia’s DGX systems as well as server vendors’ HGX systems. The H200 is the successor to the company’s H100, which uses the same Hopper architecture and helped make Nvidia the leading provider of AI chips for generative AI workloads.
What makes the H200 NVL different from the standard PCIe design is that it links two or four PCIe cards together using Nvidia’s NVLink interconnect bridge, which enables up to 900 GB/s of bidirectional throughput per GPU. The product’s predecessor, the H100 NVL, connected only two cards via NVLink.
It is also air-cooled, unlike the H200 SXM which comes with the option of liquid cooling.
According to Harris, the dual-slot PCIe form factor makes the H200 NVL “ideal for data centers with low-power, air-cooled enterprise rack designs with flexible configurations to provide acceleration for every AI and HPC workload, regardless of size.”
“Companies can use their existing racks and select the number of GPUs that best suits their needs, from one, two, four or eight GPUs, with NVLink domain scaling up to four,” he said. “Enterprises can use the H200 NVL to accelerate AI and HPC applications while improving energy efficiency through reduced power consumption.”
Like its SXM cousin, the H200 NVL comes with 141GB of high-bandwidth memory and 4.8 TB/s of memory bandwidth, up from the H100 NVL’s 94GB and 3.9 TB/s, but its maximum thermal design power only goes up to 600 watts instead of the 700-watt peak of the H200 SXM version, according to the company.
This results in the H200 NVL having slightly less horsepower than the SXM module. For example, the H200 NVL can only reach 30 teraflops of 64-bit floating point (FP64) performance and 3,341 teraflops of 8-bit integer (INT8) performance, while the SXM version can reach 34 teraflops of FP64 and 3,958 teraflops of INT8 performance. (A teraflop is a unit of measurement for one trillion floating-point operations per second.)
According to Nvidia, when it comes to running on the 70-billion-parameter Llama 3 model, the H200 NVL is 70 percent faster than the H100 NVL. As far as HPC workloads are concerned, the company said the H200 NVL is 30 percent faster for reverse time migration modeling.
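The percentage gap between the two modules implied by those published peak figures can be checked with a few lines of arithmetic. This is a quick sketch using only the numbers quoted above, not additional Nvidia data:

```python
# Compare the peak-throughput figures Nvidia published for the H200 NVL
# (PCIe) and H200 SXM modules, as quoted in this article.
specs = {
    "FP64": {"nvl": 30.0, "sxm": 34.0},      # teraflops
    "INT8": {"nvl": 3341.0, "sxm": 3958.0},  # teraflops (8-bit integer ops)
}

def percent_slower(nvl: float, sxm: float) -> float:
    """Percentage by which the NVL figure trails the SXM figure."""
    return round((1 - nvl / sxm) * 100, 1)

for metric, v in specs.items():
    print(f"{metric}: NVL trails SXM by {percent_slower(v['nvl'], v['sxm'])} percent")
# FP64 works out to roughly a 12 percent deficit; INT8 to roughly 16 percent.
```

In other words, the lower 600-watt power cap costs the NVL card roughly 12 to 16 percent of peak throughput relative to the SXM module.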
The H200 NVL comes with a five-year subscription to the Nvidia AI Enterprise Software Platform with Nvidia NIM microservices to accelerate AI development.

‘It’s more tools in the belts of our partners’

‘We wouldn’t be successful in selling, adopting any of these things without the ecosystem – service partners, the MSPs, almost the entire constituency of our partners,’ says Bargaon Balakrishnan, IBM vice president of product management for Power.

IBM will incorporate its Spyre accelerator into future Power products – including the Power11 system to be released next year – and is planning a code assistant for the Report Program Generator (RPG) language used with its i operating system, part of a series of advances solution providers can bring to customers.
Bargaon Balakrishnan, IBM vice president of product management for Power, told CRN in an interview that service partners will play a key role in getting customers to upgrade from Power10 and in bringing other 2025 releases to market.
“We wouldn’t be successful in selling, adopting any of these things without the ecosystem – service partners, the MSPs, almost the entire constituency of our partners,” he said. “Our partners have more tools to have this conversation – either around fit-for-purpose AI, if you like, where customers can move faster with just the platform they trust and know, or now even starting with a modernization discussion where the entry point is as simple as, ‘Hey, let me help you understand and manage your code.’ But it can certainly expand more value creation and value capture opportunities for the channel.”
[RELATED: Red Hat Updates Present ‘Huge’ Partner Opportunities in OpenShift, Edge]

IBM Power11 in 2025

According to Armonk, N.Y.-based IBM, the Spyre accelerator brings scalable capabilities to complex AI models and generative AI use cases. It has 32 individual accelerator cores and mounts on a Peripheral Component Interconnect Express (PCIe) card. The chip sends data from one compute engine to another and uses lower-precision numerical formats to make better use of energy and memory.
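To illustrate why lower-precision formats stretch memory and energy further, here is a back-of-the-envelope sketch. The byte sizes are standard for these formats, but the article does not say which formats Spyre actually uses, so the fp32/fp16/int8 choices below are illustrative assumptions:

```python
# Bytes needed to hold a model's weights at different numerical precisions.
# Illustrative arithmetic only; which formats Spyre uses is not specified.
BYTES_PER_ELEMENT = {"fp32": 4, "fp16": 2, "int8": 1}

def weight_memory_gb(num_params: int, fmt: str) -> float:
    """Memory in GB (1e9 bytes) to store num_params weights in format fmt."""
    return num_params * BYTES_PER_ELEMENT[fmt] / 1e9

params = 7_000_000_000  # e.g. a 7-billion-parameter model
for fmt in ("fp32", "fp16", "int8"):
    print(f"{fmt}: {weight_memory_gb(params, fmt):.0f} GB")
# Halving precision halves the memory (and bandwidth) the weights consume.
```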
Code Assistant for RPG will leverage GenAI to help software developers understand existing code, create new RPG functions from plain-English descriptions, and automatically generate test cases.
According to the vendor, Code Assistant should save IBM customers the expense, performance degradation, and risk of re-creating RPG-based applications in Java and other languages.
“The goal of this for our customers, for us, is not just to increase the productivity of existing RPG developers but, equally if not more important, to enable the next generation of talent and developers who have very little of this skill to move forward at a faster pace and become productive much faster,” Balakrishnan said. “It’s around things like code explanation – explain my vast decades of code – help me generate new freeform RPG code, help me prepare test scripts so I can make the test functions enterprise-ready, as well as modernize my old fixed-format RPG code to freeform, creating more modern versions of the RPG code base.”
With the next version of the Power processor, Balakrishnan encouraged partners to capture more wallet share from the existing install base.
Balakrishnan said Power remains the leading platform for SAP HANA and other ERP deployments, core banking deployments, Oracle and other commercial database deployments, and other mission-critical use cases. Adoption as a service and using OpenShift from IBM subsidiary Red Hat for modernization are some of the growing areas of new business with clients.
The shift in IT spending to AI-driven projects makes the technology “the entry point for many infrastructure decisions these days,” he said. “Engaging and participating in that conversation doesn’t mean selling some AI, but rather showing a future-proof platform that can do something today and protect your investment tomorrow for certain use cases and workloads.”
It’s also not too early to talk to customers about IBM’s quantum-safe approach with Power. In fact, on Wednesday, during its inaugural Quantum Developer Conference in Yorktown Heights, N.Y., IBM revealed that its researchers have delivered a quantum computer capable of running quantum circuits with 5,000 two-qubit gate operations, powered by the second iteration of IBM Quantum Heron.
“[It’s] important for service providers to play a bigger role in helping customers in that journey,” he said.
Other compelling AI use cases with Power include fraud detection, anomaly detection and supply chain optimization. But partners can bring generative AI to traditional AI use cases, such as adding summary generation for the log data used in anomaly detection.
Citing an example of hybrid cloud and AI coming together to solve a customer’s problem – as well as of IBM developing chips without necessarily creating competition with other vendors – he said a large hospital in Asia works to detect cancer in patients using Nvidia products for training and Power for inference and deployment.
“It is about incorporating AI everywhere. It’s not about one and only one use case,” he said. “We are seeing more and more of an increase in demand [for hybrid solutions] just due to data sovereignty, supply-side considerations. And I think the world is quickly realizing that it is fit for purpose. It is hybrid in nature. This trend is here to stay.”

Nutanix AI deals, AI data center guidelines and more

Recent moves from the AI ​​computing giant include an expanded partnership with hybrid multi-cloud infrastructure vendor Nutanix, new reference architectures for AI data centers, and new peer-reviewed performance numbers for its upcoming Blackwell GPUs.

Nvidia hasn’t announced new GPUs in the past three months, but that doesn’t mean it hasn’t been busy finding other ways to expand and solidify its influence in the rapidly growing AI computing market.
For his part, Nvidia CEO and co-founder Jensen Huang has been busy traveling the world and meeting with world leaders, recently inking new AI supercomputer deals that underline the company’s confidence in what is known as “sovereign AI.”
[Related: 10 Big Nvidia Executive Hires And Departures In 2024]
According to Nvidia, this refers to the idea that nations should build their own AI infrastructure, workforce, and business networks to advance their economies.
“Denmark recognizes that in order to innovate in AI, the most impactful technology of our time, it needs to cultivate domestic AI infrastructure and an ecosystem,” Huang said at an October summit with Denmark’s King Frederik.
But while Huang is making stops in Denmark and other countries like Japan, India and Indonesia, his company has been making moves in other ways in recent months.
These moves include an expanded partnership with hybrid multi-cloud infrastructure vendor Nutanix, new reference architectures for AI data centers, new peer-reviewed performance numbers for its upcoming Blackwell GPUs, the hiring of a prolific Cisco Systems inventor and Blackwell-related contributions to the Open Compute Project.
What follows is a summary of these and Nvidia’s other recent updates, arranged in reverse-chronological order from when they were announced.

Nvidia: Blackwell GPU doubles LLM performance in peer-reviewed tests

Nvidia said its upcoming B200 Blackwell GPU is twice as fast as the previous generation H100 GPU in large language model fine-tuning and pre-training.
That’s according to new peer-reviewed results in the latest MLPerf Training suite of benchmark tests released Wednesday by MLCommons.
The AI computing giant said that, on a per-GPU basis, the B200 GPU is 2.2 times faster than the H100 for fine-tuning a 70-billion-parameter Llama 2 model and twice as fast for pre-training a 175-billion-parameter GPT-3 model.
Nvidia said the large language model (LLM) tests were conducted using an AI supercomputer called Nyx that is built with the company’s DGX B200 systems, each of which comes with eight 180GB B200 GPUs.
The company also highlighted how ongoing software updates result in performance and feature improvements for its H100 GPUs, which debuted in 2022, saying the chips delivered “the highest performance among available solutions” in the new MLPerf training tests.
Among the results submitted to the latest MLPerf training test suite, Nvidia said it demonstrated that the H100 achieved a 30 percent increase in training performance for the GPT-3 175B model compared to when the benchmark was initially released.
It also demonstrated the impact of software improvements in other tests.
“The scale and performance of Nvidia Hopper GPUs has more than tripled from last year on the GPT-3 175B benchmark. Additionally, on the Llama 2 70B LoRA fine-tuning benchmark, Nvidia increased performance by 26 [percent] using the same number of Hopper GPUs, reflecting continued software enhancements,” the company said.
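Because MLPerf submissions can use different system sizes, per-GPU comparisons like the ones above divide total throughput by GPU count before taking the ratio. A minimal sketch with hypothetical throughput numbers (not real MLPerf data):

```python
# Normalize two benchmark submissions by GPU count before comparing them.
# The throughput values below are made up for illustration.
def per_gpu_speedup(new_tput: float, new_gpus: int,
                    old_tput: float, old_gpus: int) -> float:
    """Ratio of per-GPU throughput between two submissions."""
    return (new_tput / new_gpus) / (old_tput / old_gpus)

# Hypothetical: a 64-GPU B200 run vs. a 256-GPU H100 run (samples/sec).
print(per_gpu_speedup(2816.0, 64, 5120.0, 256))  # -> 2.2
```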

Nutanix expands partnership with Nvidia for new cloud AI offering

Nutanix announced Tuesday that it has expanded its partnership with Nvidia through a new cloud-native offering called Nutanix Enterprise AI.
The hybrid multi-cloud infrastructure vendor said Nutanix Enterprise AI can be deployed “on any Kubernetes platform, at the edge, in core data centers and on public cloud services” such as Amazon Web Services’ Elastic Kubernetes Service, Microsoft’s Azure Kubernetes Service and Google Cloud’s Google Kubernetes Engine.
“With Nutanix Enterprise AI, we are helping our customers easily and securely run GenAI applications on-premises or in the public cloud. Nutanix Enterprise AI can run on any Kubernetes platform and allows their AI applications to run in their own secure space with predictable cost models,” Thomas Cornely, senior vice president of product management at Nutanix, said in a statement.
Nutanix said the cloud-native offering can be deployed with Nvidia’s full-stack AI platform and has been validated to work with the Nvidia AI Enterprise software platform, including Nvidia NIM microservices, which are designed to enable “secure, reliable deployment of high-performance AI model inference.”
Nutanix Enterprise AI is part of the company’s GPT-in-a-Box 2.0 appliance, which it said is part of the Nvidia-Certified Systems program to ensure optimal performance.

Nvidia replaces Intel in Dow index

Nvidia replaced Intel on the Dow Jones Industrial Average on Nov. 8, as the AI ​​computing giant continues to put competitive pressure on the embattled chipmaker.
S&P Dow Jones Indices, the organization behind the DJIA, said in a statement a week earlier that the change is meant to “ensure more representative exposure to the semiconductor industry.”
While Nvidia’s share price has risen more than 200 percent since the beginning of the year, Intel’s shares have fallen nearly 40 percent over the same period.
According to S&P, “The DJIA is a price-weighted index, and thus persistently lower-priced stocks have a minimal impact on the index.”

Nvidia adds veteran astronaut Ellen Ochoa to board

Nvidia announced in early November that it has appointed veteran astronaut and former NASA center director Ellen Ochoa to its board of directors.
The AI ​​computing giant said the addition of Ochoa, who was the first Latina astronaut in space and former director of NASA’s Johnson Space Center in Houston, brings the board to 13 members.
“Ellen’s extraordinary experience speaks volumes for her role as a pioneer and leader,” Nvidia CEO Jensen Huang said in a statement. “We look forward to her joining Nvidia’s board on our continued journey to build the future of computing and AI.”

Nvidia unveils enterprise reference architecture for AI data centers

Nvidia on Oct. 29 revealed what it called an Enterprise Reference Architecture, which is intended to serve as a guideline for partners and customers building AI data centers.
The AI computing giant said these guidelines are meant to help partners and customers build AI data centers faster and capture business value from AI applications while ensuring they operate at optimal performance levels in a secure manner.
Server vendors participating in the Enterprise Reference Architecture Program include Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro.
The Enterprise Reference Architecture includes a comprehensive set of recommendations for accelerated infrastructure, AI-optimized networking and the Nvidia AI Enterprise software platform.
The accelerated infrastructure guidelines provide recommendations for GPU-accelerated servers in the Nvidia-Certified Systems program, which ensures that such servers are optimized to deliver the best performance.
The networking guidelines explain how to “provide optimal network performance” with Nvidia’s Spectrum-X Ethernet platform and Bluefield-3 data processing units. They also provide “guidance on optimal network configuration at multiple design points to address different workload scale requirements”.
Nvidia AI Enterprise includes NeMo and NIM microservices to “easily build and deploy AI applications” as well as Nvidia Base Command Manager Essentials for “infrastructure provisioning, workload management, and resource monitoring.”

Nvidia contributes Blackwell platform design to Open Compute Project

Nvidia announced on October 15 that it has contributed the “foundational elements” of its Blackwell accelerated computing platform to the Open Compute Project with the goal of enabling widespread adoption in the data center market.
The Blackwell platform in question is Nvidia’s upcoming GB200 NVL72 rack-scale system, which comes with 36 GB200 Grace Blackwell superchips, each of which features an Arm-based, 72-core Grace CPU paired with two B200 GPUs.
Nvidia’s contribution to the Open Compute Project (OCP) focuses on the electro-mechanical design of the GB200 NVL72. Elements contributed by Nvidia include the rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, as well as NVLink cable cartridge volumetrics.
The AI ​​computing giant said it has also expanded support for OCP standards to its Spectrum-X Ethernet platform. According to the company, this will let customers use Spectrum-X’s “adaptive routing and telemetry-based congestion control” to accelerate Ethernet performance for scale-out AI infrastructure.
“Building on a decade of collaboration with OCP, Nvidia is working with industry leaders to shape specifications and designs that can be widely adopted throughout the data center,” Nvidia CEO Jensen Huang (pictured) said in a statement. “By advancing open standards, we are helping organizations around the world harness the full potential of accelerated computing and build the AI factories of the future.”

Nvidia hires top Cisco inventor amid massive networking sales

Nvidia recently hired a 25-year Cisco Systems engineering veteran, once credited as the switching giant’s most prolific inventor, to lead the development of AI and networking architecture at the AI computing giant.
JP Vasseur, a recently departed Cisco Fellow who was most recently vice president of machine learning and AI engineering for networking, announced in an Oct. 2 LinkedIn post that he has joined Nvidia as a senior distinguished engineer and chief architect of AI and networking.
Vasseur’s announcement came a little more than a month after Nvidia CFO Colette Kress said the company’s Spectrum-X line of Ethernet networking products for data centers is on its way to becoming a “multibillion-dollar product” within a year of its launch.

Nvidia launches NIM Agent Blueprint to boost enterprise AI

Nvidia announced on August 27 the launch of “pre-trained, customizable AI workflows” that can help enterprises more easily build and deploy custom generative AI applications.
The AI ​​computing giant is calling these workflows NIM Agent Blueprints, which come with sample applications built with Nvidia Nemo, Nvidia NIM, and partner microservices, as well as reference code, customization documentation, and a Helm chart for deployment.
Channel partners selected by Nvidia to offer NIM Agent Blueprints include Accenture, Deloitte, SoftServe and World Wide Technology.
Nvidia unveiled NIM microservices as an addition to its AI Enterprise software platform at its GTC 2024 event in March. At the time, the company said they aimed to help businesses “develop and deploy AI applications faster while retaining full ownership and control of their intellectual property.”
With the NIM Agent Blueprint, enterprises can customize generative AI applications using their own proprietary data and continuously refine them based on user feedback.
The initial set of NIM Agent Blueprints includes workflows for digital humans in customer service, generative virtual screening in computer-assisted drug discovery, and multimodal PDF extraction in retrieval-augmented generation.
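NIM microservices expose an OpenAI-style HTTP chat-completions API, so a deployed blueprint can typically be queried with a plain JSON POST. The endpoint URL and model name below are placeholders for illustration, not a real deployment:

```python
import json

# Hypothetical local NIM endpoint; adjust host/port for a real deployment.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for an OpenAI-compatible chat-completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("meta/llama3-8b-instruct", "Summarize this PDF section.")
print(json.dumps(body, indent=2))
# Send with, e.g., requests.post(NIM_URL, json=body) once a NIM is running.
```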

Red Hat updates present ‘huge’ partner opportunities in OpenShift, edge

Kirsten Newcomer, senior director of hybrid cloud platforms at Red Hat, said partners should see ‘a huge opportunity with OpenShift virtualization,’ especially with continued disappointment over pricing changes at rival VMware.

Updates to Red Hat OpenShift, OpenShift AI, edge devices and Developer Hub should offer partners more ways to do business with customers, executives at the open source tool vendor and IBM subsidiary tell CRN.
Improved virtualization capabilities in OpenShift, more model training support in OpenShift AI, lower-latency capabilities in edge devices and new artificial intelligence templates in the Developer Hub are some of the biggest news the Raleigh, N.C.-based vendor unveiled during the annual KubeCon event, which runs through Friday in Salt Lake City.
In response to a question from CRN during a virtual press conference, Kirsten Newcomer, senior director of hybrid cloud platforms at Red Hat, said partners should see “a huge opportunity with OpenShift virtualization,” especially given continued disappointment with pricing changes at rival VMware.
She also encouraged partners to move VMware workloads to Red Hat using Red Hat’s migration toolkit and to talk to customers about other ways to modernize applications, whether moving them to the cloud through OpenShift or running apps on-premises through OpenShift.
[RELATED: Red Hat Partner Program Updates Include Incentives, New Digital Experience]

Red Hat at KubeCon 2024

“There’s a great opportunity for SIs, for consultancies, to help customers with that migration and to help the teams responsible for OpenShift, to help the teams responsible for apps running in VMs, help them adopt this environment, get comfortable with it, get familiar with it,” Newcomer said. “It really sets an organization on a path that improves their ability to modernize their applications because they can become familiar with the environment, even if they are not yet ready to move to microservices or other types of more modern applications.”
Red Hat told CRN as part of its 2024 Channel Chiefs that about 80 percent of total sales come through indirect channels and alliance relationships, and that it expects to increase the number of channel partners it works with within the next 12 months.
One of the major announcements from Red Hat at KubeCon 2024 was the general availability (GA) of OpenShift 4.17. According to Red Hat, this version improves safe memory oversubscription for virtual machines and gives users a technical preview of storage live migration between devices and classes while the VM is running.
Also in the technology preview is native network isolation for namespaces and a confidential computation verification operator to improve data protection and security.
Red Hat also revealed that its OpenShift Lightspeed AI-powered virtual assistant has been moved to Technology Preview.

Red Hat OpenShift AI

According to the vendor, Red Hat OpenShift AI 2.15 will become generally available later this month.
OpenShift AI, released last year, supports AI model development, training, serving, automation and other predictive and generative AI use cases. The updates include a technical preview of a model registry to manage versions, metadata and model artifacts, and tools to detect data drift and bias.
According to Red Hat, the new OpenShift AI also supports Nvidia NIM microservices and Advanced Micro Devices (AMD) graphics processing units (GPUs).
Jeff DeMoss, Red Hat’s AI product management director, said on the call that solution providers should see “tremendous opportunities” to develop domain- and industry-specific offerings through OpenShift AI.
“They can leverage their knowledge in a specific domain or industry, and not just for general use cases,” DeMoss said. “They can solve more packaged use cases and patterns that they’re seeing that are unique to specific verticals.”

Red Hat Device Edge

Version 4.17 of Red Hat’s Device Edge includes new low latency and near real-time capabilities to appeal to use cases ranging from autonomous vehicles to industrial settings.
In response to a question from CRN during a virtual press conference, Shobhan Lakkapragada, Red Hat’s senior director of product management for edge, called edge a “huge opportunity area in which consultants, global systems integrators can play a big role because there’s a lot of industry-specific changes, industry-specific knowledge that our SI partners and MSPs can bring as well.”
“We are very interested in partnering with SIs and advisers to expand into this new market,” Lakkapragada said. “We are relatively new in this area – I would say, a few years. Therefore many end customers in this sector are business-related decision-makers. And they are seeing all the changes happening in the IT world. And that’s what they want to bring to operations technology. This is where SIs can play a big role in helping end customers make the transition.”

Red Hat Developer Hub

Red Hat created five new templates focused on common AI use cases available in its Developer Hub offering: audio-to-text, chatbots, code generation, object detection, and retrieval augmented generation (RAG) chatbots.
According to Red Hat, the developer hub, which launched this year, has more than 20,000 developers on the platform.
In response to a question from CRN during a virtual press conference, Balaji Sivasubramanian, Red Hat’s senior director of developer tools, said the Developer Hub is “a great play for our value-added or service partners” and “definitely a great” opportunity, especially when it comes to driving customer AI adoption and developer productivity.
“Deloitte, they’re actually working on the Developer Hub not only for their internal use case – the internal developers themselves – but they’re also offering solutions based on the Developer Hub to their end customers,” Sivasubramanian said.
The Developer Hub’s high level of customization for internal developer portals within enterprises “creates a tremendous opportunity for these value-added services; SI partners are able to take this and customize it for the customer offering and use case,” he said. “I see a lot of partners already lining up to take our product and bring it to market.”

Neural Magic acquisition

During KubeCon, Red Hat also revealed that it has signed a deal to purchase Neural Magic, a Somerville, Mass.-based upstart that provides software and algorithms for generative AI inference workloads.
Part of Red Hat’s attraction to Neural Magic, which spun out of the Massachusetts Institute of Technology (MIT) in 2018, is the upstart’s leadership in vLLM, an open-source project for model serving that supports all major model families and advanced inference acceleration research.
vLLM also supports AMD GPUs, Amazon Web Services’ Neuron, Google Tensor Processing Units (TPUs), Intel Gaudi, Nvidia GPUs, x86 central processing units (CPUs) and other hardware backends, according to Red Hat.
Although Red Hat did not say when it expects to close the acquisition, the vendor expects partners to benefit.
“With Red Hat and Neural Magic together, general computing and infrastructure partners powering AI will be able to better scale AI across all platforms and accelerators,” Red Hat said in a statement.
The vendor said that “ISV partners building valuable solutions to help meet today’s unique business challenges will receive stronger inference and performance to integrate with their offerings” and that “OEM partners will be able to take advantage of better open-source access to GenAI infrastructure.”