Red Hat update presents ‘huge’ partner opportunities in OpenShift, Edge

Kirsten Newcomer, senior director of hybrid cloud platforms at Red Hat, said partners should see ‘a huge opportunity with OpenShift virtualization,’ especially with continued disappointment over pricing changes at rival VMware.

Updates to Red Hat OpenShift, OpenShift AI, edge devices and Developer Hub should offer partners more ways to do business with customers, executives at the open source tool vendor and IBM subsidiary tell CRN.
Improved capabilities around virtualization in OpenShift, more model training support in OpenShift AI, improved capabilities for lower latency in edge devices, and new artificial intelligence templates in the Developer Hub are among the biggest news the Raleigh, N.C.-based vendor announced at the annual KubeCon event, which runs through Friday in Salt Lake City.
In response to a question from CRN during a virtual press conference, Kirsten Newcomer, senior director of hybrid cloud platforms at Red Hat, said partners should see “a huge opportunity with OpenShift virtualization,” especially given continued disappointment with pricing changes at rival VMware.
She also encouraged partners to move VMware workloads to Red Hat, pointing to Red Hat’s migration toolkit and to conversations with customers about other ways to modernize applications and move them to the cloud through OpenShift, or even to run apps on-premises through OpenShift.
[RELATED: Red Hat Partner Program Updates Include Incentives, New Digital Experience]

Red Hat KubeCon 2024

“There’s a great opportunity for SIs, for consultancies, to help customers with that migration and to help the teams responsible for OpenShift, the teams responsible for apps running in VMs, adopt this environment, get comfortable with it, get familiar with it,” Newcomer said. “It really sets an organization on a path that improves their ability to modernize their applications because they can become familiar with the environment, even if they are not yet ready to move to microservices or other types of more modern applications.”
Red Hat, among CRN’s 2024 Channel Chiefs, said about 80 percent of its total sales come through indirect channels and alliance relationships, and that it expects to increase the number of channel partners it works with within the next 12 months.
One of the major announcements from Red Hat at KubeCon 2024 was the general availability (GA) of OpenShift 4.17. According to Red Hat, this version improves safe memory oversubscription for virtual machines and gives users a technology preview of live migration of storage between devices and storage classes while the VM is running.
Also in the technology preview is native network isolation for namespaces and a confidential computation verification operator to improve data protection and security.
Red Hat also revealed that its OpenShift Lightspeed AI-powered virtual assistant has been moved to Technology Preview.

Red Hat OpenShift AI

According to the vendor, Red Hat OpenShift AI 2.15 will become generally available later this month.
OpenShift AI, released last year, supports AI model development, training, serving, automation and other predictive and generative AI use cases. The updates include a technology preview of a model registry for managing model versions, metadata and model artifacts, as well as tools to detect data drift and bias.
According to Red Hat, the new OpenShift AI also supports Nvidia NIM microservices and Advanced Micro Devices (AMD) graphics processing units (GPUs).
Jeff DeMoss, Red Hat’s AI product management director, said on the call that solution providers should see “tremendous opportunities” to develop domain- and industry-specific offerings through OpenShift AI.
“They can leverage their knowledge in a specific domain or industry, and not just for general use cases,” DeMoss said. “They can solve more packaged use cases and patterns that they’re seeing that are unique to specific verticals.”

Red Hat Device Edge

Version 4.17 of Red Hat Device Edge includes new low-latency and near-real-time capabilities to appeal to use cases ranging from autonomous vehicles to industrial settings.
In response to a question from CRN during a virtual press conference, Shobhan Lakkapragada, Red Hat’s senior director of product management for edge, called edge a “huge opportunity area in which consultants, global systems integrators can play a big role because there’s a lot of industry-specific changes, industry-specific knowledge that our SI partners and MSPs can bring as well.”
“We are very interested in partnering with SIs and advisors to expand into this new market,” Lakkapragada said. “We are relatively new in this area, I would say a few years. Many end customers in this sector are business-side decision-makers. They are seeing all the changes happening in the IT world, and that’s what they want to bring to operations technology. This is where SIs can play a big role in helping end customers make the transition.”

Red Hat Developer Hub

Red Hat created five new templates focused on common AI use cases available in its Developer Hub offering: audio-to-text, chatbots, code generation, object detection, and retrieval augmented generation (RAG) chatbots.
According to Red Hat, Developer Hub, which launched this year, has more than 20,000 developers on the platform.
In response to a question from CRN during a virtual press conference, Balaji Sivasubramanian, Red Hat’s senior director of developer tools, said that the Developer Hub is “a great play for our value-added or service partners” and a “great” opportunity, especially when it comes to driving customer AI adoption and developer productivity.
“Deloitte, they’re actually working on the Developer Hub not only for their internal use case – the internal developers themselves – but they’re also offering solutions based on the Developer Hub to their end customers,” Sivasubramanian said.
The Developer Hub’s high degree of customization for internal developer portals within enterprises “creates a tremendous opportunity for these value-added services, SI partners to be able to take this and customize it for the customer offering and use case,” he said. “I see a lot of partners already lining up to take our product and bring it to market.”

Neural Magic acquisition

During KubeCon, Red Hat also revealed that it has signed a deal to purchase Neural Magic, a Somerville, Mass.-based startup that provides software and algorithms for generative AI inference workloads.
Part of Red Hat’s attraction to Neural Magic, which spun out of the Massachusetts Institute of Technology (MIT) in 2018, is the startup’s leadership in vLLM, an open source project for model serving that supports all major model families and advanced inference acceleration research.
vLLM also supports AMD GPUs, Amazon Web Services’ Neuron, Google Tensor Processing Units (TPUs), Intel Gaudi, Nvidia GPUs, x86 central processing units (CPUs) and other hardware backends, according to Red Hat.
Although Red Hat did not say when it expects to close the acquisition, the vendor expects partners to benefit.
“Together, Red Hat and Neural Magic will enable general computing and infrastructure partners powering AI to better scale AI across all platforms and accelerators,” Red Hat said in a statement.
The vendor said that “ISV partners building valuable solutions to help meet today’s unique business challenges will gain stronger inference and performance to integrate with their offerings” and that “OEM partners will be able to take advantage of better open source access to GenAI infrastructure.”

AMD says it’s laying off 4 percent of its workforce amid AI chip push

“As part of aligning our resources with our largest growth opportunities, we are taking a number of targeted steps that will unfortunately result in reducing our global workforce by approximately 4 percent,” an AMD spokesperson said in a statement to CRN.

AMD said it is laying off about 4 percent of its global workforce to focus on its “biggest growth opportunities,” including an effort to challenge Nvidia’s AI chip dominance.
An AMD spokesperson said in a statement to CRN, “As part of aligning our resources with our largest growth opportunities, we are taking a number of targeted steps that will unfortunately result in reducing our global workforce by approximately 4 percent.”
“We are committed to treating affected employees with respect and supporting them during this transition,” the representative said.
[Related: Analysis: Intel’s AI Chip Efforts Stall As AMD Gets A Boost Against Nvidia]
While AMD’s share price is 3.9 percent higher than at the beginning of the year, shares have fallen more than 13 percent since the chip designer disclosed in its third-quarter earnings report in late October that its fourth-quarter revenue forecast fell short of expectations.
An AMD spokesperson did not say how many employees would be affected by the layoffs or how many people the company currently employs.
AMD had about 26,000 employees as of last year, according to the company’s annual 10-K filing with the U.S. Securities and Exchange Commission filed in January.
According to LinkedIn data, AMD’s workforce has grown 24 percent in the past six months and 33 percent in the past year.
The layoffs were first reported by Wccftech.

AMD hit with layoffs after mixed earnings report

The layoff announcement comes two weeks after AMD said it more than doubled its data center business and increased client processor sales by 29 percent in the third quarter compared with the same period last year.
However, at the same time, AMD’s embedded chip revenue declined 25 percent year-over-year, while its gaming chip revenue declined 69 percent in the third quarter.
This resulted in total revenue of $6.8 billion in the third quarter, up 18 percent from the same period last year and up 17 percent sequentially. The company’s gross margin expanded 3 points year over year to 50 percent, and its gross profit rose 24 percent year over year to $3.4 billion, while its earnings per share rose 161 percent to 47 cents.
In its third quarter report, the company said it expected to have revenue of about $7.5 billion in the fourth quarter, which would be an increase of about 22 percent year-over-year and 10 percent sequential growth.
AMD’s third-quarter revenue exceeded Wall Street expectations by nearly $100 million, while its adjusted earnings per share were in line with analyst estimates. However, its fourth-quarter revenue forecast fell slightly short of Wall Street expectations.

Instinct GPU, EPYC CPUs drive most growth

According to the company, AMD’s data center revenue in the third quarter increased 122 percent year over year to a record $3.5 billion, driven by a surge in Instinct GPU shipments combined with a “strong ramp” of EPYC CPU sales.
On the third-quarter earnings call, AMD CEO Lisa Su said the company was able to raise its 2024 Instinct GPU sales forecast to more than $5 billion from $4.5 billion due to the completion of “some important customer milestones,” such as meeting reliability requirements in data centers and optimizing chips for certain AI workloads.
Two key customers that helped fuel sales in the third quarter were Microsoft and Meta, both of which expanded their usage of AMD’s MI300X GPUs for internal workloads, according to Su.
While Microsoft is using the MI300X to run several Copilot services powered by OpenAI’s GPT-4 AI models, Meta has broadly deployed the MI300X to power its inference infrastructure at scale, “including using the MI300X exclusively to serve all live traffic for their most demanding Llama [405-billion-parameter] frontier model,” she said.
The company also saw the availability of MI300X public cloud instances expand across Microsoft, Oracle Cloud and smaller cloud providers, while “several startups and industry leaders” such as Databricks and Essential AI showed “strong” adoption of those instances.
As for AMD’s EPYC CPU business, Su said the number of EPYC-based public cloud instances from Microsoft, Amazon Web Services and others grew 20 percent year over year in the third quarter to more than 950.
According to Su, those instances were used by a growing number of enterprise customers, including Adobe, Boeing, Micron, Nestle, Slack, Synopsys, Tata and others.
For enterprises using on-premises servers, AMD saw “strong double-digit percentage” year-over-year growth in EPYC sales for the fifth consecutive quarter, according to Su. At the same time, Dell Technologies, Hewlett Packard Enterprise, Lenovo and other server vendors expanded the number of platforms supporting fourth-generation EPYC CPUs by 50 percent in the quarter.
“We are building strong momentum with large enterprise customers, highlighted by wins with large technology, energy, financial services and automotive companies in the third quarter, including Airbus, Daimler Truck, FedEx, HSBC, Siemens, Walgreens and others,” Su said on the earnings call.

AI PC demand helps AMD boost client sales

In AMD’s traditional client processor business, the company said it saw “strong demand” for the latest desktop and laptop processors based on its Zen 5 architecture.
This includes a significant increase in sales of its Ryzen AI 300 processors, the company’s first chips expected to support Microsoft’s Copilot+ PC features in the next generation of AI PCs that began hitting the market in the last few months. Microsoft has previously said it expects to bring Copilot+ PC features to eligible x86-based laptops in November.
Su said AMD has seen “significant double-digit percentage” growth in desktop channel sales thanks to the company’s recently released Ryzen 9000 processors.
Additionally, according to the CEO, the chip designer “made good progress expanding its presence in the commercial PC market in the quarter by closing several large deals with AstraZeneca, Bayer, Mazda, Shell, Volkswagen and other enterprise customers.”
With AMD’s recently launched Ryzen AI Pro 300 series, Su said the company’s commercial prospects look even better, with both HP Inc. and Lenovo “on track to more than triple” the number of Ryzen AI Pro platforms they introduce in 2024. The company also expects more than 100 Ryzen AI Pro platforms to be on the market next year.
The growing availability of AMD-based commercial PCs puts the company “well positioned for share gains as businesses refresh millions of Windows 10 PCs that will no longer receive Microsoft technical support starting in 2025,” Su said.

Embedded, gaming businesses struggle

What dragged down AMD’s embedded business in the third quarter was “ongoing softness in the industrial market,” though Su said demand is slowly recovering, thanks primarily to customers using its chips for testing and simulation purposes.
According to the CEO, the company is also seeing momentum for its Versal family of adaptive systems-on-chip, which are being widely adopted by many aerospace customers, including SpaceX.
“Design win momentum is very strong across our portfolio, which is tracking to grow more than 20 percent year over year in 2024, putting our embedded business in a good position to grow faster than the overall market in the coming years,” Su said.
For AMD’s gaming business, the main hurdle was a decline in semi-custom sales, as Microsoft’s and Sony’s latest video game consoles that use AMD chips are now several years into their respective life cycles, resulting in reduced consumer demand.
However, Su said Sony plans to soon release the PlayStation 5 Pro, which comes with a new AMD semi-custom system-on-chip “that expands our multi-generation partnership.”
The company’s gaming graphics business also declined, with revenue falling year over year as AMD prepares to “transition to our next-generation Radeon GPUs” based on its RDNA 4 architecture, according to Su. Those GPUs are on track to launch in early 2025.

Intel-owned Altera’s top executive retires amid stake sale, IPO plans

Shannon Poulin, COO of Intel’s programmable chip business Altera, says he has retired from the company as Intel looks to sell a stake in the independent subsidiary ahead of a planned initial public offering.

The COO of Intel’s programmable chip business Altera said he has retired from the company as the parent company looks to sell a stake in the independent subsidiary ahead of a planned initial public offering.
Shannon Poulin, a 25-year Intel sales veteran who became Altera’s COO a year ago, announced his departure in a LinkedIn post on Monday, saying he was “excited to think about the next chapter” after leading Altera “once again on the path to stand-up and IPO.”
[Related: Exclusive: Altera CEO Says Intel’s IPO Plan For FPGA Unit ‘Has Not Changed’]
“The technology world is a collaborative ecosystem where no company can survive alone. I cherish the relationships I have built with customers and partners who are doing incredible things using Intel/Altera products,” he wrote.
An Altera spokesperson confirmed to CRN that Poulin has left the company.
“We thank Shannon for his tremendous contributions over many years of service and leadership. He was a valued member of our team, and we wish him all the best in his future endeavors,” the representative said in a statement.
Acquired by Intel for $16.7 billion in 2015, Altera designs FPGAs, short for field-programmable gate arrays: semiconductor chips that, unlike standard CPUs and ASICs (application-specific integrated circuits), can be reprogrammed for a variety of purposes, including advanced functionality.
Poulin had been Altera’s No. 2 executive since Intel announced plans more than a year ago to separate the business, formerly known as the Programmable Solutions Group, into an independent subsidiary that reclaimed its original brand name. As Intel said in October 2023, it aims to sell a stake in the business and then hold an IPO for Altera by 2026.
Before becoming COO of Altera, Poulin served as corporate vice president and general manager of the Programmable Solutions Group for more than two years. Before that, he held a variety of sales, marketing and product roles, including corporate vice president and general manager of global markets and partners.
In Intel’s third-quarter earnings report last Thursday, the company said Altera’s revenue declined 44 percent year over year but rose 14 percent sequentially to $412 million, while its operating income improved by $9 million. The FPGA business has been growing its revenue and profitability since the first quarter, after experiencing a steep slowdown in demand last year.
“With an increasingly competitive roadmap, the business is well-positioned to show continued top- and bottom-line improvement,” Intel CEO Pat Gelsinger said in an earnings call last week.
While Intel had been planning to use the proceeds of an Altera stake sale and IPO to help fund Gelsinger’s expensive turnaround plan, the need to generate additional cash took on new urgency when the semiconductor giant said in August that it would cut more than 15,000 jobs and $10 billion in costs in response to worsening financial conditions.
Sandra Rivera, who was named CEO of Altera when the spin-off plan was announced, told CRN in September that Intel’s plans for Altera “have not changed,” contradicting a report that said the semiconductor giant was planning an outright sale of Altera.
“We are executing on the plan, which is not a sale of Altera, but selling a stake in the business, which has always been the plan, which we have communicated for over a year, and we feel comfortable doing an IPO in 2026. That’s the plan,” she said in the interview.
In mid-October, CNBC reported that Intel is trying to sell at least a minority stake in its Altera unit in a transaction that would raise several billion dollars of cash for the chipmaker and value the subsidiary at about $17 billion.
Gelsinger confirmed Intel’s plans for Altera in an earnings call last week.
“Consistent with what we have said previously, we are focused on selling our stake in Altera on the way to an IPO in the coming years. To that end, we have begun discussions with potential investors and hope to reach a conclusion in early 2025,” he said.

Products Of The Year 2024: The Finalists

CRN staff compiled the top partner-friendly products that launched or were significantly updated over the last year. Now it’s up to solution providers to choose the winners.

Application Performance and Observability
As more applications run in hybrid-cloud and multi-cloud environments, maintaining application performance has become a more complex task. Application performance management and observability tools help IT organizations maintain the health, performance and user experience of business applications, according to market researcher Gartner. Such tools are used by IT operations managers, site reliability engineers, cloud and platform teams, application developers and software product owners.
Datadog Observability Platform
Dynatrace Unified Observability Platform
Grafana 11
IBM Instana Observability
New Relic Observability Platform
Splunk Observability Cloud

Artificial Intelligence: AI PCs
Everyday information workers and consumers are adopting the rapidly growing number of AI applications, copilots and other AI-driven software to improve their productivity and creativity. That’s fueling demand for personal computers with specialized processors, hardware and software to handle AI tasks. Global AI PC unit shipments are expected to exceed 43 million this year, according to market researcher Gartner, and soar to more than 114 million in 2025.
Acer TravelMate P4 14 (AMD)
Apple MacBook Pro (M3)
Dell Latitude 7455 (Qualcomm)
HP EliteBook 1040 G11 (Intel)
Lenovo ThinkPad 14S Gen 6 (Qualcomm)
Samsung Galaxy Book4 Pro (Intel)

Artificial Intelligence: Productivity Suites
Copilots, AI-powered assistants and other AI-based productivity software have become the most popular vehicle for users to tap into the power of artificial intelligence technology. These tools can help with everyday tasks including writing and editing documents, generating images, conducting research, automating repetitive tasks and more. AI productivity software, along with AI PCs, are the products that are bringing AI capabilities to the masses.
Microsoft Copilot
Google Gemini

Artificial Intelligence: Infrastructure
Businesses and organizations are rapidly expanding their use of AI. Building, deploying and running AI and machine learning applications, however, takes a lot of compute horsepower and the ability to process huge amounts of data. That’s boosting demand for powerful AI hardware in data centers and the cloud. Systems that support AI initiatives are expected to provide high levels of performance and scalability.
Dell PowerEdge R760xa
Lenovo ThinkSystem SR780a V3
Supermicro AS-4125GS-TNHR2-LCC

Big Data
Data volumes continue to explode and the global “datasphere” – the total amount of data created, captured, replicated and consumed – is growing more than 20 percent a year and is expected to reach approximately 291 zettabytes in 2027, according to market researcher IDC.
But wrangling all that data is a major challenge for businesses and that’s fueling demand for a range of big data tools to help businesses access, collect, manage, move, transform, govern and secure data.
Cloudera Open Data Lakehouse
Databricks Data Intelligence Platform
HPE Ezmeral Data Fabric Software
Microsoft Intelligent Data Platform
Snowflake Data Cloud
Starburst Galaxy

Business Applications
Business applications, including Enterprise Resource Planning (ERP) and financial management software, are the operational backbone for many businesses and organizations. ERP applications are the tools they use to automate and manage their business processes including accounting and finance, HR, supply chain and procurement, manufacturing, and more.
Epicor ERP
Oracle NetSuite
Microsoft Dynamics 365
Sage Intacct
SAP S/4HANA
Syspro ERP

Business Intelligence and Data Analytics
Many businesses and organizations are deriving huge value and competitive advantages from data generated by their own IT systems, collected through customer transactions and acquired from outside sources.
Businesses analyze data to gain insights about markets, their customers and their own operations. They are using the data to fuel digital transformation initiatives. They are even using it to support new data-intensive services or packaging it into data products.
Amazon Redshift
Domo Data Experience Platform
Google Cloud BigQuery
MicroStrategy ONE
Qlik Sense
Salesforce Tableau

Data Protection, Management and Resiliency
Data is the lifeblood of the modern enterprise. Data that’s lost or unavailable, either due to system failure, a disastrous event like a fire or earthquake, human error or a cyberattack, can cause major disruptions.
Data resilience and protection systems and services help businesses protect and maintain access to data, and identify, detect, respond and recover from data-destructive events.
Cohesity Data Cloud
Commvault Cloud Powered by Metallic AI
Dell PowerProtect Data Manager Appliance
HYCU R-Cloud
Rubrik Security Cloud
Veeam Data Platform

Edge Computing and Internet of Things
Efforts to bring applications and data processing closer to data sources is driving the proliferation of local edge servers and IoT devices. That, in turn, is driving demand for products to better manage and support increasingly distributed computing networks.
The global market for edge computing hardware, software and services is expected to grow at a CAGR of 15.7 percent to $111.3 billion by 2028, according to Markets and Markets.
Eaton iCube
HPE Edgeline
IBM Edge Application Manager
Red Hat Device Edge
Scale Computing Autonomous Infrastructure Management Engine (AIME)
Schneider Electric EcoStruxure Micro Data Center R-Series

Hybrid Cloud Infrastructure
Hybrid cloud infrastructure combines cloud-based (often Infrastructure-as-a-Service) resources with on-premises/private cloud IT systems, working together and sharing applications and data to provide businesses with the flexibility and scalability they need to support distributed business workloads and processes. A report from Allied Market Research says the global hybrid cloud market was $96.7 billion in 2023 and will reach $414.1 billion by 2032.
Dell Technologies Apex Hybrid Cloud
HPE GreenLake
IBM Hybrid Cloud
NetApp Hybrid Cloud
Nutanix Hybrid Multicloud
VMware Cloud Foundation

MSP Platforms
Managed services have been one of the fastest growing segments of the IT channel as more businesses, organizations and government entities rely on MSPs to manage their IT infrastructure and end-user systems.
That’s boosting demand for MSP platforms, including the remote monitoring and management tools, professional services automation systems and other tools that MSPs rely on to do their jobs.
Atera
ConnectWise Asio Platform
HaloPSA
Kaseya 365
N-able Cloud Commander
Syncro Platform

Networking – Enterprise
Networking hardware, including routers, switches, hubs and bridges, have long been a mainstay of the channel. Today channel companies offer networking solutions and services that span data center and cloud networks, campus LAN and WAN, Network-as-a-Service, network management and automation, and network security systems.
Cisco Networking Cloud
HPE Aruba Networking Enterprise Private 5G
Juniper AI-Native Networking Platform
Nile NaaS
Prosimo AI Suite for Multi-Cloud Networking

Networking – Wireless
Wireless networks are key to making computing, communications and collaboration ubiquitous whether in the home, throughout offices and other workspaces, in manufacturing and industrial plants, and across large venues such as conference facilities and sports stadiums. Wi-Fi 7 is the seventh generation of wireless technology offering faster speeds and improved connectivity and capacity.
Extreme Networks AP5020 universal Wi-Fi 7 access point
Fortinet FortiAP 441K Wi-Fi 7 access point
HPE Aruba Networking Wi-Fi 7 access point
Zyxel Wi-Fi 7 access point

Power Protection and Management
Power protection and management systems and appliances are a critical component for protecting critical IT infrastructure and keeping data centers up and running in the event of extreme events. The product category includes technology for monitoring and managing power usage, protecting IT systems against electricity surges, and providing backup in the event of power failures.
CyberPower PFC Sinewave 1U UPS
Eaton 9PX 6kVA Lithium-Ion UPS
Schneider Electric Easy UPS 3-Phase 3M Advanced
Vertiv Liebert GXT5 Lithium-Ion UPS

Processors – CPUs
CPU semiconductors are the processing engines that power servers, laptop and desktop PCs, and mobile devices. Intel was long-dominant in the CPU space, but rival AMD has developed highly competitive products in recent years. Apple, meanwhile, has been developing its own “silicon” for its Mac, iPad and iPhone devices.
AMD Ryzen Pro 8040 Series
Intel Core Ultra Series
AmpereOne
Apple M3
Qualcomm Snapdragon X Elite

Processors – GPUs
Graphics processing units, or GPUs, are specialized processors originally developed to accelerate the performance of computer graphics. But they are increasingly being designed into IT systems for high-performance computing tasks such as data science and AI applications. Nvidia has been a pioneer in developing GPUs for a broad range of applications, but rivals Intel and AMD have been expanding their GPU product portfolios.
AMD Instinct MI300X
Intel ARC A570M
Nvidia H200

Public Cloud Platforms
Public cloud platforms provide organizations with an alternative to building and managing their own IT systems and data centers. Public cloud operators also offer their own portfolios of cloud products and services such as application hosting, data storage and analytics. The value proposition is that cloud services reduce capital spending for businesses and provide more flexibility by allowing them to scale IT usage up or down as needed.
Amazon Web Services
CoreWeave Cloud
Google Cloud Platform
Microsoft Azure
Oracle Cloud Infrastructure (OCI)
Snowflake Data Cloud

SD-WAN
SD-WAN is a software-defined approach to managing and optimizing the performance and security of wide area networks that connect users to applications and cloud platforms. SD-WAN benefits include improved performance and connectivity, enhanced security, simplified management and lower operating costs.
Cisco Catalyst SD-WAN
Extreme Networks Extreme Cloud SD-WAN
Fortinet Secure SD-WAN
HPE Aruba EdgeConnect SD-WAN
Palo Alto Networks Prisma SD-WAN
Zscaler Zero Trust SD-WAN

Security – Cloud and Application Security
The rapid growth of cloud computing has created new security challenges for businesses and organizations as they adopt and utilize distributed IT infrastructure and applications that lie both within and outside of the corporate firewall. Cloud and application security technologies provide a range of capabilities including protection against internal and external threats, identity and access control, and network visibility and management.
CrowdStrike Falcon Cloud Security
F5 Distributed Cloud Services Web Application Scanning
Orca Cloud Security Platform
Palo Alto Networks Prisma Cloud
SentinelOne Singularity Cloud Security
Tenable Cloud Security
Wiz Cloud Security Platform

Security – Data
Protecting data has become a top priority with the proliferation of ransomware attacks and other cybercrimes. The media is filled with headlines about businesses, hospitals, insurance companies, government entities and other organizations that find themselves blocked from accessing their own critical data or discover that their data has been stolen and is for sale on the dark web.
Data security tools provide a range of functions to accomplish their task including data encryption, user authentication and controlling access to data, monitoring data in real time to detect and respond to unusual activity, manage compliance with data governance requirements, and more.
ForcePoint ONE Data Security
IBM Guardium Data Protection
Proofpoint Information Protection
Rubrik Security Cloud
Wiz DSPM
Zscaler Data Protection

Security – Email and Web Security
Email and other internet communications are perhaps the most common vector for cybersecurity attacks including spam, phishing, malware delivery and system takeover.
Email security products, including antivirus/antimalware tools, spam filters, authentication and encryption systems, are a key component of a business’s overall IT security strategy. Web security tools help prevent attacks against websites.
Abnormal Security Platform
Akamai API Security
Barracuda Email Protection
Cloudflare Application Security
Mimecast Advanced Email Security
Proofpoint Threat Protection

Security – Endpoint Protection
Businesses can be most vulnerable through the endpoint devices (desktop PCs, laptops, smartphones) employees use for everyday work, along with embedded devices, IoT and other edge computing systems. This is especially true with today’s post-pandemic hybrid work practices where many of these devices now sit outside of corporate security perimeters.
Products in this technology category include antivirus and antimalware tools, endpoint protection platforms, and endpoint detection/response and extended detection/response software.
CrowdStrike Falcon Insight XDR
Huntress Managed EDR
SentinelOne Singularity XDR
Sophos Intercept X
ThreatLocker Protect
Trend Micro Trend Vision One

Security – Identity and Access Management
Businesses use identity and access management tools, backed by related policies and processes, to manage digital identities and control access to corporate IT systems and data. IAM tools, a foundational cybersecurity technology for zero trust IT initiatives, are key to identifying, authenticating and authorizing users – including employees and trusted business partners – while protecting against unauthorized access.
CyberArk Workforce Identity
Ping Identity PingOne for Workforce
Okta Workforce Identity Cloud
Microsoft Entra ID
OpenText NetIQ Identity Manager
SailPoint Identity Security Cloud

Security – Managed Detection and Response
Many businesses and organizations, especially SMBs, lack in-house cybersecurity expertise. Many turn to managed detection and response (MDR) providers for outsourced services that monitor clients’ IT systems, endpoints, networks and cloud environments on a 24/7 basis and respond to detected cyberthreats. MDR offerings generally combine cybersecurity teams, advanced threat detection tools and security operations center functions.
Arctic Wolf MDR
CrowdStrike Falcon Complete Next-Gen MDR
Huntress MDR for Microsoft 365
SentinelOne Singularity MDR
Sophos MDR
ThreatLocker Cyber Hero MDR

Security – Network
Businesses face multiple challenges to keep their network infrastructure secure and operational. Potential threats include distributed denial-of-service (DDoS) attacks, network-based ransomware, insider threats and password attacks, to name a few.
Securing corporate networks, meanwhile, has become all the harder with the move to remote work and the increasing use of cloud applications.
The specific technology components of a sound network security strategy include firewalls, SASE (secure access service edge) systems, network access control technology, antivirus and antimalware software, intrusion prevention systems, and tools for cloud, application and email security.
Cisco Hypershield
Fortinet FortiGate
SonicWall Cloud Secure Edge
Sophos XGS Firewall
ThreatLocker Cyber Hero MDR
WatchGuard ThreatSync+ NDR

Security – Security Operations Platform
Security Operations links security and IT operations teams to improve an organization’s cybersecurity posture across IT systems, networks and applications. SecOps software incorporates tools and processes to provide a unified approach to cybersecurity to help identify security threats and vulnerabilities, reduce risks and respond more quickly to security incidents.
Arctic Wolf Security Operations
CrowdStrike Falcon Next-Gen SIEM
Google Security Operations
Microsoft Sentinel
Palo Alto Networks Cortex XSIAM 2.0
Splunk Enterprise Security

Security – Secure Access Service Edge
Secure Access Service Edge (SASE) platforms combine network and security services into a single cloud-based system – a critical concept for managing today’s multi-cloud environments and hybrid workforces. SASE can include multiple functions including zero-trust network access, secure web gateways, cloud access security brokers and firewall services to provide centralized control over identity and access policies and operations.
Cato SASE Cloud Platform
Cisco Secure Access
Fortinet FortiSASE
Netskope One SASE
Palo Alto Networks Prisma SASE
Zscaler Zero Trust SASE

Storage – Enterprise
Data volumes continue to explode and the global “datasphere” – the total amount of data created, captured, replicated and consumed – is growing more than 20 percent a year and is expected to reach approximately 291 zettabytes in 2027, according to market researcher IDC.
That data, of course, must be stored somewhere. While more data is being stored on cloud platforms, many businesses and organizations maintain on-premises data storage systems – either standalone or as part of a hybrid system – for a number of reasons including data security and governance and reduced internet costs.
Dell PowerStore
HPE Alletra Storage MP
Infinidat SSA Express
NetApp AFF C-Series
Pure Storage FlashArray//E
Quantum ActiveScale Z200 Object Storage

Storage – Software-Defined
Software-defined storage technology uncouples or abstracts storage management and provisioning from the underlying hardware. One benefit is that pools of physical storage resources can be managed as a single system, helping to reduce costs compared to traditional storage area network (SAN) and network-attached storage (NAS) systems.
DDN Infinia
Dell PowerFlex
HPE GreenLake for Block Storage
IBM Software-Defined Storage
Pure Storage Purity

Unified Communications and Collaboration
Unified communications (including Unified Communications as a Service) integrates VoIP, instant messaging, video conferencing and other communication capabilities through a single interface. UCC has taken on increased importance with more employees working from home and other remote locations.
UCC is a long-time channel mainstay with solution providers implementing, maintaining and operating UCC systems. The global UCC market is expected to reach $141.6 billion by 2027, according to Markets and Markets.
Cisco Webex
Intermedia Unite
Microsoft Teams
Nextiva Unified Customer Experience Platform
RingCentral RingCX

AMD raises Instinct GPU sales forecast again due to high AI demand

AMD CEO Lisa Su says the $500 million upgrade to the company’s Instinct GPU 2024 sales forecast was based on the completion of ‘some important customer milestones,’ such as meeting reliability requirements in data centers and optimizing chips on certain AI workloads.

AMD said it now expects to earn more than $5 billion from sales of its Instinct data center GPUs this year due to high demand from hyperscalers like Meta and Microsoft, as well as other customers that use the chips to run AI workloads.
This marks an upgrade from the $4.5 billion Instinct GPU sales forecast AMD issued in July and the $4 billion guidance it gave in April. The Santa Clara, Calif.-based chip designer wants to challenge Nvidia, whose revenue more than doubled year over year to $30 billion in the last quarter alone, mainly due to its continued dominance of the AI computing market.
[Related: AMD Says Instinct MI325X Bests Nvidia H200, Vows Huge Uplift With MI350]
AMD revealed the pace of sales of its Instinct GPUs during its third-quarter earnings call Tuesday, where CEO Lisa Su said that significant growth in data center and client processor sales more than offset declines in gaming and embedded product sales.
As a result, AMD’s third-quarter revenue was $6.8 billion, up 18 percent compared to the same period last year and up 17 percent sequentially. The company’s gross margin expanded 3 points year over year to 50 percent and its gross profit rose 24 percent year over year to $3.4 billion, while its earnings per share rose 161 percent to 47 cents.
It expects to have revenue of about $7.5 billion in the fourth quarter, which would be an increase of about 22 percent year-over-year and 10 percent sequential growth.
“This is an incredibly exciting time for AMD as the breadth of our technology and product portfolio, coupled with our deep customer relationships and the diversity of the markets we address, provides us with a unique opportunity as we execute our next arc and make AMD the ultimate end-to-end AI leader,” Su said on the earnings call.
AMD’s third-quarter revenue exceeded Wall Street expectations by nearly $100 million, while its adjusted earnings per share were in line with analyst estimates. However, its fourth-quarter revenue forecast fell slightly short of Wall Street expectations.
The company’s share price was down more than 7.5 percent in after-hours trading Tuesday.

Breakdown By Business Unit

AMD said data center revenue in the third quarter more than doubled from the same period a year earlier, rising 122 percent year over year to a record $3.5 billion, thanks to a “strong ramp” of Instinct GPU shipments alongside growth in EPYC CPU sales. According to the company, that was 25 percent more revenue than the previous quarter.
Client segment revenue, meanwhile, grew 29 percent year over year and 26 percent sequentially to $1.9 billion, primarily due to “strong demand” for its Zen 5-based Ryzen processors.
Embedded revenue, however, declined 25 percent year over year to $927 million as customers worked through existing inventory, though it rose 8 percent sequentially on improving demand. Gaming revenue fell 69 percent year over year and 29 percent sequentially, primarily due to lower semi-custom sales.

Why AMD raised Instinct GPU forecast

Su said the $500 million upgrade to AMD’s Instinct GPU 2024 sales forecast was based on the completion of “some important customer milestones,” such as meeting reliability requirements in data centers and optimizing chips on certain AI workloads. This helped the company increase Instinct shipments faster than before.
Two key customers that helped fuel sales in the third quarter were Microsoft and Meta, both of which expanded their use of AMD’s MI300X GPU for internal workloads, according to Su.
While Microsoft is using the MI300X to run several Copilot services powered by OpenAI’s GPT-4 AI model, Meta has broadly deployed the MI300X to power its large-scale inference infrastructure, including using it to serve all live traffic for its most demanding Llama [405-billion-parameter] frontier model, she said.
The company also saw the availability of MI300X public cloud instances expand among Microsoft, Oracle Cloud, and smaller cloud providers, while “many startups and industry leaders” such as Databricks and Essential AI saw “strong” adoption of such instances.
Su said AMD has also received positive feedback from customers regarding its pending $4.9 billion acquisition of cloud computing specialist ZT Systems.
The acquisition, which is scheduled to close in the first quarter of next year, is expected to give AMD AI infrastructure expertise at scale while enabling hyperscaler customers to rapidly deploy customized board and module designs through server vendors for a wide range of enterprise solutions, according to the CEO.
Also, Su said, AMD’s plan to sell ZT Systems’ data center infrastructure manufacturing business “has received significant interest from multiple parties to date.”
Looking ahead to the next two quarters, Su said AMD customer engagement around Instinct GPUs is “really growing quite well.”
“Our largest cloud customers are broadening the set of workloads running on AMD Instinct, and we are also engaged with many large cloud and enterprise customers who are actively working with us to optimize their workloads,” she said.

Outrun By Nvidia, Intel Pitches Gaudi 3 Chips For Cost-Effective AI Systems

In presentations and in interviews with CRN, Intel executives elaborate on the chipmaker’s strategy to market its Gaudi 3 accelerator chips to businesses who need cost-effective AI systems backed by open ecosystems after CEO Pat Gelsinger admitted that Intel won’t be ‘competing anytime soon for high-end training’ against rivals like Nvidia.

Intel said its strategy for Gaudi 3 accelerator chips will not focus on chasing the market for training massive AI models that has created seemingly unceasing demand for Nvidia’s GPUs, turned the rival into one of the world’s most valuable companies and led to a new class of expensive, energy-chugging data centers.

Instead, the semiconductor giant believes its Gaudi 3 chips will find traction with businesses who need cost-effective AI systems for training and, to a much greater extent, inferencing smaller, task-based models and open-source models.
[Related: Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown]
Intel outlined its strategy for Gaudi 3 when it announced last month that the accelerator chip, a key product in CEO Pat Gelsinger’s turnaround plan, will debut in servers from Dell Technologies and Supermicro in October. General availability is expected later in the fourth quarter, a delay from the third-quarter release window Intel gave in April.
Hewlett Packard Enterprise is expected to follow with its own Gaudi 3 system in December. System availability from other OEMs, including Lenovo, was not disclosed.
On the cloud front, Gaudi 3 will be available through services hosted on IBM Cloud early next year and sooner on Intel Tiber AI Cloud, the chipmaker’s recently rebranded cloud service that is meant to support commercial applications.
At a recent press event, Intel homed in on its competitive messaging around Gaudi 3, saying that it delivers a “price performance advantage,” particularly around inference, against Nvidia’s H100 GPU, which debuted in 2022, played a major role in Nvidia’s ascent as a data center vendor and was succeeded earlier this year by the memory-rich H200.
When it comes to an 8-billion-parameter Llama 3 model, Gaudi 3 is roughly 9 percent faster than the H100 but provides 80 percent better performance-per-dollar, according to calculations by Intel. For a 70-billion-parameter Llama 2 model, the chip is 19 percent faster but improves performance-per-dollar by roughly two times, the company said.
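Intel's relative-performance and performance-per-dollar figures together imply a price gap between the two chips. As a back-of-the-envelope sketch (the derived price ratio is not an official Intel or Nvidia number; only the speed and perf-per-dollar figures come from Intel's claims), the arithmetic can be checked like this:

```python
# Sanity check of Intel's performance-per-dollar claims.
# perf-per-dollar = perf / price, so the implied relative price of
# Gaudi 3 vs. the H100 is rel_perf / rel_perf_per_dollar.

def implied_price_ratio(rel_perf: float, rel_perf_per_dollar: float) -> float:
    """Return Gaudi 3's implied price as a fraction of the H100's."""
    return rel_perf / rel_perf_per_dollar

# Llama 3 8B: ~9% faster, ~80% better perf-per-dollar
ratio_8b = implied_price_ratio(1.09, 1.80)
# Llama 2 70B: ~19% faster, ~2x perf-per-dollar
ratio_70b = implied_price_ratio(1.19, 2.00)

print(f"Implied Gaudi 3 price vs. H100 (8B figures):  {ratio_8b:.2f}")
print(f"Implied Gaudi 3 price vs. H100 (70B figures): {ratio_70b:.2f}")
```

Under both sets of figures, Gaudi 3 would have to cost roughly 60 percent of the H100's price for Intel's claims to hold, which is consistent with the company positioning the chip on cost rather than peak performance.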
Intel has previously said that Gaudi 3’s power efficiency is on par with the H100 when it comes to inferencing large language models (LLMs) with an output of 128 tokens, but it has a performance-per-watt advantage when that output grows to 1,024 tokens. It has also said that Gaudi 3 has faster LLM inference throughput than the H200 with the same large token output. Tokens typically represent words or characters.
While Gaudi 3 was able to outperform the H100 and H200 on these two LLM inference throughput tests, the chip’s overall throughput for floating-point operations across 16-bit and 8-bit formats fell short of the H100’s capabilities.
For bfloat16 (BF16) and 8-bit floating-point (FP8) matrix math, Gaudi 3 can perform 1,835 trillion floating point operations per second (TFLOPS) in each format, while the H100 can reach 1,979 TFLOPS for BF16 and 3,958 TFLOPS for FP8.
But even if the chipmaker can claim advantages over the H100 or H200, Intel has to contend with the fact that Nvidia has accelerated to an annual chip release cadence, which means the rival plans to debut by the end of the year its next-generation Blackwell GPUs, which Nvidia has promised will be more powerful and efficient.
Intel is also facing another rival that has become increasingly competitive in the AI computing space: AMD. The opposing chip designer last week said its forthcoming Instinct MI325X GPU can outperform Nvidia’s H200 on inference workloads and vowed that its next-generation MI350 chips will deliver a significant jump in performance.

Why Intel Thinks It Can Find A Way Into The AI Chip Market

Knowing the battle ahead, Intel is not intending to go head-to-head with Nvidia’s GPUs in the race to enable the fastest AI systems for training massive AI models, like OpenAI’s 1.8 trillion-parameter GPT-4 Mixture-of-Experts model.
In an interview with CRN, Anil Nanduri, the head of Intel’s AI acceleration office, said purchasing decisions around infrastructure for training AI models so far have been mainly made based on performance and not cost.
That trend has largely benefited Nvidia so far, and it has allowed the company to build a groundswell of support among AI developers. In turn, developers have made significant investments in Nvidia’s full stack of technologies to build out their applications, raising the bar for moving development to another platform.
“And if you think in that context, there is an incumbent benefit, where all the frontier model research, all the capabilities are developed on the de facto platform where you’re building it, you’re researching it, and you’re, in essence, subconsciously optimizing it as well. And then to make that port over [to a different platform] is work,” Nanduri said.
It may make sense, at least for now, for hyperscalers like Meta and Microsoft to invest significant sums of money in ultrapowerful AI data center infrastructure to push cutting-edge capabilities without an immediate need to generate profits. OpenAI, for instance, is expected to generate $5 billion in losses—some of which is tied to services—this year on $3.6 billion in revenue, CNBC and other publications reported last month.
But many businesses cannot afford to make such investments and accept such losses. They also likely don’t need massive AI models that can answer questions on topics that go far beyond their focus areas, according to Nanduri.
“The world we are starting to see is people are questioning the [return on investment], the cost, the power and everything else. This is where—I don’t have a crystal ball—but the way we think about it is, do you want one giant model that knows it all?” Nanduri said.
Intel believes the answer is “no” for many businesses and that they will instead opt for smaller, task-based models that have lighter performance needs.
Nanduri said while Gaudi 3 is “not catching up” to Nvidia’s latest GPU from a head-to-head performance perspective, the accelerator chip is well-suited to enable economical systems for running task-based models and open-source models on behalf of enterprises, which is where the company has “traditional strengths.”
“For the enterprises where we have a lot of strong relationships, they’re not the first rapid adopters of AI. They’re actually very thoughtful about how they’re going to deploy it. So I think that’s what’s driving us to this assessment of what is the product market fit and to our customer base, where we traditionally have strong relationships,” he said.
Justin Hotard, an HPE veteran who became leader of Intel’s Data Center and AI Group at the beginning of the year, said he and other leaders settled on this strategy after hearing from enterprise customers who want more economical AI systems, which has helped inform Intel’s belief that there could be a big market for such products.
“We feel like where we are with the product, the customers that are engaged, the problems we’re solving, that’s our swim lane. The bet is that the market will open up in that space, and there’ll be a bunch of people building their own inferencing solutions,” he said in a response to a CRN question at the press event.
At a financial conference in August, Gelsinger admitted that the company isn’t going to be “competing anytime soon for high-end training” because its competitors are “so far ahead,” so it’s betting on AI deployments with enterprises and at the edge.
“Today, 70 percent of computing is done in the cloud. 80-plus percent of data remains on-prem or in control of the enterprise. That’s a pretty stark contrast when you think about it. So the mission-critical business data is over here, and all of the enthusiasm on AI is over here. And I will argue that that data in the last 25 years of cloud hasn’t moved to the cloud, and I don’t think it’s going to move to the cloud,” he said at the Deutsche Bank analyst conference.

Intel Bets On Open Ecosystem Approach

Intel also hopes to win over customers with Gaudi 3 by embracing an open ecosystem approach across hardware infrastructure, software platforms and applications, which executives said contrasts with Nvidia’s “walled garden” strategy.
Saurabh Kulkarni (pictured), vice president of product management in Intel’s Data Center and AI Group, said customers and partners will have the choice to scale Gaudi 3 from one system with eight accelerator chips, all the way to a 1,024-node cluster with over 8,000 chips, with several configuration options in between, all meant for different levels of performance.
To enable the hardware ecosystem, Intel is replicating its Xeon playbook by providing OEMs with reference architectures and designs, which “can then be used as blueprints for our customers to replicate and build infrastructure in a modular fashion,” he said.
These reference architectures will be backed by a variety of open standards, ranging from Ethernet and PCIe for connectivity to DAOS for distributed storage and SYCL for programming, which Intel said helps prevent vendor lock-in.
When it comes to software, Intel executive Bill Pearson said the company’s open approach means that partners and customers can choose from a variety of tools from different vendors to address every software need for an AI system. He contrasted this with Nvidia’s approach, which has been to create many of its own tools that only work with Nvidia GPUs.
“Rather than us creating all the tools that a customer or developer might need, we rely on our ecosystem partners to do that. We work with them, and we help the customers then choose the one that makes sense for their particular enterprise,” said Pearson, who is vice president of software in the Data Center and AI Group.
A key aspect of this open ecosystem software approach is the Open Platform for Enterprise AI (OPEA), a group started earlier this year under the Linux Foundation that is meant to serve as the foundation for microservices that can be used for AI systems. Members of the group range from chip companies like AMD, Intel and Rivos, to a wide variety of software providers, including virtualization providers like VMware and Red Hat as well as AI and machine learning platforms such as Domino, Clarifai and Intel-backed Articul8.
“When we look at how to implement a solution leveraging those microservices, every component of the stack has multiple offers, and so you need to be very specific about what’s going to work best for you. Is there a preference that you have? Is it a purchasing agreement? Is it a technical preference? Is there a relationship preference?” said Pearson.
“And then customers can choose the pieces, the components, the ingredients that are going to make sense for their business. To me, that’s one of the best things about our open ecosystem, is that we don’t hand you the answer. Instead, we give you the tools to go and select the best answer,” he added.
Key to Intel’s software approach for AI systems is a focus on retrieval-augmented generation (RAG), which allows LLMs to perform queries against proprietary enterprise data without creating the need to fine-tune or re-train those models.
“This finally enables organizations to customize and launch GenAI applications more quickly and more cost effectively,” Pearson said.
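The RAG flow Pearson describes boils down to retrieving relevant enterprise documents at query time and grounding the model's prompt in them, so no fine-tuning is needed. The sketch below is purely illustrative and is not Intel's stack: a production deployment would use an embedding model and vector database rather than keyword overlap, and all document text and function names here are made up.

```python
# Minimal illustration of retrieval-augmented generation (RAG):
# retrieve relevant documents for a query, then assemble a grounded prompt.
# Hypothetical data; a real system would use embeddings + a vector DB.

DOCS = [
    "Q3 revenue for the widget division was $12M, up 8 percent year over year.",
    "The employee travel policy requires manager approval for trips over $2,000.",
    "Gadget division revenue declined 3 percent in Q3 on weaker demand.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in retrieved proprietary data via the prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What was Q3 widget revenue?", DOCS)
print(prompt)
```

Because the proprietary data travels in the prompt rather than in the model weights, the underlying LLM stays untouched, which is what lets organizations update their data without re-training.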
To help customers set up RAG-based AI applications, Intel plans to introduce later this year Intel AI for Enterprise RAG, a catalog of solutions developed by Intel and third parties that is set to debut before the end of the year. These solutions address use cases ranging from code generation and code translation, to content summarization and question-and-answer.
Pearson said Intel is “uniquely positioned” to address challenges faced by businesses in deploying RAG-based AI infrastructure with technologies developed by Intel and partners, which start with validated servers equipped with Gaudi and Xeon chips from OEMs and includes software optimizations, vector databases and embedding models, management and orchestration software, OPEA microservices, and RAG software.
“All of this makes it easy for enterprise customers to implement solutions based on Intel AI for Enterprise RAG,” he said.

Channel Will Be ‘Key’ For Gaudi 3 Rollout

In an interview with CRN last week, Greg Ernst, corporate vice president and general manager of Intel’s Americas sales organization and global accounts, said channel partners will be critical to getting Gaudi 3-based systems in the hands of customers.
For Intel to get to this point, Ernst said the chipmaker needed Gaudi 3 to reach a broad range of support from server vendors that “partners like World Wide Technology can really rally around.” He added that Intel has “done a lot of learning with the partners of how to sell the product and implement product support.”
“Now we’re ready for scale, and the partners are going to be key for that,” he said.
Rohit Badlaney, general manager of IBM Cloud product and industry platforms, told CRN that the company’s “build” independent software vendor (ISV) partners, value-added distributors and global systems integrators are three major ways IBM plans to sell cloud services based on Gaudi 3, which will largely be focused around its watsonx AI platform.
“We’ve got a whole sales ecosystem team that’s going to focus on build ISVs, both embedding and building with our watsonx platform, the same kind of efforts going on now with our Red Hat developer stack,” he said at Intel’s press event last month.
Badlaney said IBM Cloud has tested Intel’s “price performance advantage” claims for Gaudi 3 and is impressed with what they have found.
“As we look at the capability in Gaudi 3, specifically for our watsonx data and AI platform, it really differentiated in our testing from a cost-performance perspective. So the first set of use cases that we’ll apply it to is inferencing around our own branded models and some of the other models that we see,” he said.
Vivek Mohindra, senior vice president of corporate strategy at Dell, said by adopting Gaudi 3 into its PowerEdge XE9680 portfolio, his company is giving partners and customers an alternative to systems with accelerator chips from Intel’s rivals. He added that Dell’s Omnia software for managing high-performance computing and AI workloads works well with the OPEA microservices, giving enterprises an “easy button” to deploy new infrastructure.
“It gives customers a choice as well, and then on software, with our Omnia stack being interoperable with [Intel’s] OPEA, that provides for an immense ability for the customers to adopt and scale it relatively easily,” he said at Intel’s press event.
Alexey Stolyar, CTO of Northbrook, Ill.-based systems integrator International Computer Concepts, told CRN that his company started taking high-level training courses around Gaudi 3 and that he can see the need for cost-effective AI systems enabled by such chips, mainly because of how much power it takes to train or fine-tune massive models.
“What you’re going to find is that a lot of the world is going to focus on smaller, more efficient, more precise models than these huge ones. The huge ones are good at general tasks, but they’re not good at very specific tasks. The enterprises are going to start developing either their own models or fine-tune specific open-source models, but they’re going to be smaller and they’re going to be more efficient,” he said.
Stolyar said that while International Computer Concepts hasn’t started talking to customers proactively about Gaudi 3 systems, one customer has already approached his company about developing a Gaudi 3 system for a turnkey appliance the customer plans to sell for specific workloads, based on benchmarks where the chip has been shown to perform well.
However, the solution provider executive said he isn’t sure how big of an opportunity Gaudi 3 represents yet and added that Intel’s success will heavily rely on how easy Gaudi 3 systems are to use in relation to those powered by Nvidia chips and software.
“I think customers want alternatives. I think having good competition is good, but it’s not going to happen until that ease of use is there. Nvidia has been doing this for a while. They’ve been fine-tuning their software packages and so on for a long time in that ecosystem,” he said.
A senior leader at a solution provider told CRN that his company’s conversations with Intel representatives have given him the impression that the chipmaker isn’t seeking to take Nvidia head-on with Gaudi 3 but is instead hoping to win a “percentage” of the AI market.
“They’ve been talking about Gaudi 3 for a long time: ‘Hey, this is going to be the thing for us. We’re going to compete.’ But then I think they’re also sort of coming in with tempered expectations of like, ‘Hey, let’s compete in a percentage of this market. We’re not going to take on Nvidia, per se, head-to-head, but we can chew away at some of this and give customers options. Let’s pick out five customers and go talk to them,’” said the executive, who asked not to be identified so he could speak frankly about his work with Intel.
The solution provider leader said he does think there could be a market for cost-effective AI systems like the ones that are powered by Gaudi 3 because he has heard from customers who are becoming more conscious about high AI infrastructure costs.
“In some ways, you’re conceding that somebody else has already won when you take that approach, but it’s also pretty logical to say, ‘Hey, if it does all these things, you would be a fool not to look at it, because it’ll save you money and power and everything else.’ But that’s not a take-over-the-world type of strategy,” he said.

‘Making sure x86 stays x86’

‘We support x86. x86 is very important to us. We support it for PC, workstation, data center. And so the fact that the architecture was fragmenting is not good for the industry, so I like what they’re doing,’ Nvidia CEO Jensen Huang told CRN about the formation of the Intel-AMD x86 ecosystem advisory group.

Leaders of the world’s three largest computer chip makers endorsed Intel and AMD’s x86 partnership at Lenovo Tech World on Tuesday, highlighting the importance of keeping the venerable x86 design running smoothly.
Intel and AMD on Tuesday unveiled a new x86 ecosystem advisory group that will include several tech giants as members, including Microsoft, Dell Technologies, HP Inc., Hewlett Packard Enterprise, Lenovo and Google, as well as Linux creator Linus Torvalds.
Jensen Huang, Nvidia’s founder, chairman and CEO, told CRN that the x86 architecture had been fragmenting, which risks diluting what the term means and is not good for the industry.
“We support x86. x86 is very important to us. We support it for PC, workstation, data center. And so the fact that the architecture was becoming fragmented is not good for the industry. So I like what they’re doing, pulling it together and making sure x86 stays x86. Otherwise, it’s not x86 anymore. So I think what they’re doing is really awesome,” he said.
Intel CEO Pat Gelsinger, whose company designed the original x86 chip, staunchly defended the architecture and the partnership with AMD when he appeared on stage after being introduced by Lenovo Chairman and CEO Yuanqing Yang.
“You all probably have a front-row seat to the first partnership between Intel and AMD,” Gelsinger told the crowd. “Some have said, ‘Is x86 done?’ I will tell you that the rumors of its death are greatly exaggerated. We are alive and well, and x86 is thriving. Who would have thought [Lisa Su, chair and CEO of AMD] and I would agree on something?”
A few minutes later, on the same stage, Su told the audience that her company’s partnership with Intel reflects the unpredictable nature of the technology market.
“Pat was on stage. He mentioned our x86 advisory group. It’s something that tells you what a unique time this is in technology,” she said. “At the end of the day, what we are trying to do is accelerate computation and accelerate the adoption of computation. x86 has been the leading architecture over the last 40 years. The idea is that by bringing together all these founding members, AMD and Intel can really accelerate the pace of innovation going forward.”
In their joint announcement, Intel and AMD said the advisory group aims to “shape the future of x86 and foster developer innovation through a more unified set of instructions and architectural interfaces.” This is expected to increase “compatibility, predictability, and stability across x86 product offerings,” the two companies said.
As CRN reported on Tuesday, Intel and AMD’s x86 CPU businesses face a growing threat from the Arm instruction set architecture, which has enabled consumer tech giant Apple, mobile chip designer Qualcomm and cloud computing giants like Amazon Web Services, Microsoft and Google to design their own CPUs for the PC and cloud markets. Meanwhile, another mobile chip designer, MediaTek, has publicly stated that it intends to introduce Arm-based CPUs for Windows PCs and is reportedly working with Nvidia to do so.
This has intensified the competition for Intel and AMD. For example, both companies have responded to the high core density and efficiency of Arm-based server CPUs with their own high-core-count, efficiency-focused processors, with AMD debuting its EPYC Bergamo chips last year and Intel recently launching its Xeon 6 E-core chips.
Here are Gelsinger and Su’s comments from the Lenovo conference as well as what Huang told CRN about the new x86 advisory group.

Intel’s Pat Gelsinger

Let’s start with the AI era. When I think about it, we are entering one of the most exciting eras of innovation. For someone like me who has been participating in technology for over 40 years, to say that this is one of the most exciting eras is pretty profound. I think of it like the internet when it first came out. I was on the internet today. Great! I used it twice today. Great! Now my kids use it seven times a second.
Think about a similar impact of AI. From ‘Wow, I did a GenAI query’ to ‘I did something on ChatGPT’ to becoming mainstream in everything we do. It will fundamentally change the relationship between people and technology.
Because of that, we think about every company becoming an AI company. Every device becomes an AI device. Everyone with a PC has the power of AI at their fingertips.
I think of that AI device as fundamentally changing the cost economics of AI. I don’t go to the cloud to get AI. It’s right here on my device. I don’t need the networking costs. It’s right here on my device. That’s how we think about the AI PC category: a defining moment.
It’s a bit like Centrino. Remember when Centrino came out? ‘Wow, I have Wi-Fi on my PC. I don’t have any hotspot. What should I do with this thing?’ Suddenly, productivity moved from a defined use case to the ‘Internet’ and all connectivity was opened up and how revolutionary was that?
This is more important. And [research firm] IDC estimates that half the market will be AI PCs by next year and 100 percent by the end of the decade.
We’ve already shipped 20 million AI PC devices, building an ecosystem that’s second to none. And we couldn’t imagine doing this with anyone more prominent and more aggressive than our long-time partners at Lenovo and my long-time friend “YY.”
We’ve been through a lot together in over 20 years. … We’ve done the engineering. We have innovated and we have challenged each other. The x86 architecture is at its core and has been the cornerstone of our partnership for decades.
Some people have said, ‘Is x86 done?’ I will tell you that the rumors of its death have been greatly exaggerated.
We’re alive and well, and x86 is thriving. We consider this one of the most important periods of innovation ahead of us. We see the x86 architecture as the foundation of computing for decades to come, being optimized, extended and scaled for the opportunities that AI will present, and our ecosystem is strengthening and growing.
Intel and AMD have announced the x86 ecosystem advisory group. And for this, Lisa and Pat agreed on something.
Who would have thought?
We really think this is the right time. And what better platform to do this than the Lenovo stage? I’m here with YY, and Lisa will be out in a minute. The advisory group brings together leaders from across the ecosystem to shape the future of x86: to simplify software development, to ensure interoperability and interface consistency, and to give developers standard architectures, tools and instruction sets with a clear view of the future. The value of a thriving ecosystem is greater than ever. The advantages in this age of AI and 3-D chip architectures usher in a new class of innovation around systems and new workloads.
And we welcome Lenovo to that group, along with others: Tim Sweeney [founder and CEO of Epic Games], Broadcom, VMware, Google, Microsoft, Oracle and Red Hat. The advisory group sees the role played by x86 as fundamental to the future of the data center.
Together we will continue to drive a flexible, open, and cost-effective x86 future. It is at the heart of our networking solutions. It is at the heart of our data center solutions. It also enables us to actually say, ‘AI everywhere.’

AMD’s Lisa Su

I have been in this industry for more than 30 years. AI is truly the most important technology I have seen in my career. The most amazing part of this is that we are still in the early days, but what we are seeing is that the pace of innovation is faster than anything we have seen before. Frankly, you could say we have made more progress in the last two years than in the last 10 years.
We’ve already heard a lot about the excitement around AI and what this new era of AI actually brings. I really see this as an opportunity to bring AI to solve the world’s most important challenges.
AMD and Lenovo have a very unique partnership. So when you look at our portfolio, it’s about CPUs, GPUs, AI PCs, networking, all those elements. Together with Lenovo, we are actually creating all those solutions.
You’ve actually heard from one of our other partners, Intel. Pat was on stage. He mentioned our x86 advisory group. This is something that tells you what a unique time this is in technology.
At the end of the day, what we’re trying to do is accelerate computation and accelerate the adoption of computation. x86 has been the leading architecture over the past 40 years. The idea is that AMD and Intel can really accelerate the pace of innovation going forward by bringing all these founding members together.
Last week we announced our latest AMD EPYC processors. These are the best CPUs for cloud, enterprise and AI. We are building on our strong partnerships across the data center ecosystem, including Lenovo, to deliver these solutions and truly transform data centers.
If you look at the growth that people need, if you look at the compute power that people need, you really need the best. We love what we’re doing for Cloudflare, PayPal, and many other customers to bring the best technology to market.
The truth of the matter is that everyone wants more AI computation.

Jensen Huang of Nvidia

We support x86. x86 is very important to us. We support it for PC, workstation, data center. And so the fact that the architecture was becoming fragmented is not good for the industry, so I like what they’re doing: pulling it together and making sure x86 stays x86. Otherwise, it’s not x86 anymore, so I think what they’re doing is really awesome.

Five companies that won this week

For the week ending October 11, CRN takes a look at companies that brought their ‘A’ game to the channel, including Presidio, AMD, Intel, Microsoft and the winners of the CRN 2024 Triple Crown Award.

Week ending October 11
Topping this week’s Came to Win list is solutions provider Presidio for a strategic acquisition that expands both its geographic reach and technology cross-sell opportunities.
Also making the list is AMD for its big step into the AI PC arena with the introduction of its Ryzen AI Pro 300 processors. Rival Intel makes the list for the launch of an AI cloud service featuring its new Gaudi 3 accelerator chips. And Microsoft is here for allowing partners to sell professional services in its marketplace, a move that creates potential opportunities for solution providers looking for new ways to acquire and transact with customers.
The list includes 47 solution providers who are CRN’s 2024 Triple Crown Award winners.

Presidio expands geographic reach, enhances cross-sell opportunities with strategic acquisition
Solution provider all-star Presidio this week acquired Internetwork Engineering, a Charlotte, N.C.-based IT solutions provider with an extensive presence in the Southeastern U.S.
Presidio CEO Bob Cagnazzi said his New York City-based company will now be better positioned to serve customers in the Southeastern U.S. “This strategic acquisition provides the right mix of synergy and growth opportunities for both organizations and aligns perfectly with our focus on investing in opportunities that foster growth and innovation,” Cagnazzi said in a statement.
Internetwork Engineering specializes in collaboration, data center, networking and cybersecurity solutions.
In addition to strengthening Presidio’s scale in key markets, the acquisition gives Internetwork Engineering the opportunity to cross-sell the cloud-based services and solutions, including consumption offerings, managed services and cybersecurity, that comprise Presidio’s broader portfolio.

AMD dazzles with new Ryzen AI Pro, announces upcoming Instinct chips
AMD this week called its newly launched Ryzen AI Pro 300 processors “the best AI PC platform” for businesses and said they’ll bring new levels of security and manageability to Microsoft’s Copilot+ PCs that are due to hit the market next year.
Unveiled at AMD’s Advancing AI event in San Francisco, the Ryzen AI Pro 300 processors combine leadership performance and efficiency with new enterprise-level security features such as cloud bare-metal recovery as well as remote management and deployment capabilities, according to the chip designer.
According to AMD, PC vendors are expected to use Ryzen AI Pro processors in more than 100 computer designs by 2025.
AMD is also playing competitive hardball with its upcoming 256-GB Instinct MI325X GPU, which the company said can outperform Nvidia’s 141-GB H200 processor on AI inference workloads. When it comes to training AI models, AMD said the MI325X is on par with or slightly better than the H200, the successor to Nvidia’s popular and powerful H100 GPU.
AMD also promised that the next-generation MI350 accelerator chips will significantly improve performance.

Intel launches AI cloud with Gaudi 3 chips, Inflection AI partnership
Meanwhile, Intel made some AI and processor noise of its own this week with the launch of an AI cloud service featuring its new Gaudi 3 accelerator chips, which the company said will help underpin a new enterprise AI solution with hybrid cloud capabilities from generative AI startup Inflection AI.
The service, called Intel Tiber AI Cloud and effective at the beginning of the month, represents a rebrand and expansion in scope of the Intel Tiber Developer Cloud. Launched about a year ago, the Intel Tiber Developer Cloud gave customers and partners early access to new chips like Gaudi 3 and Xeon 6 for development purposes.
With Intel Tiber AI Cloud, the chipmaker is expanding the scope of the service to production-level cloud instances that can be used for business purposes. The customers Intel is targeting are large AI startups and enterprises. According to Intel, in addition to providing Gaudi 3 instances, the service also includes instances powered by Gaudi 2 chips, Xeon CPUs, Core Ultra 200V CPUs and Max series GPUs.
With the Intel Tiber AI Cloud announcement, the semiconductor giant said it is partnering with Inflection AI on a new enterprise AI solution that is available on its cloud service and will be available as server appliances early next year.
Called Inflection for the Enterprise, the Gaudi 3-powered solution is designed to “deliver empathetic, conversational, employee-friendly AI capabilities and provide the control, customization, and scalability needed for complex, large-scale deployments,” according to both companies.

Microsoft allows partners to sell business services in the marketplace
Microsoft has begun allowing partners to sell professional services in its marketplace in the U.S., Canada and the United Kingdom, a potential opportunity for solution providers looking for new ways to acquire and transact with customers.
Partners can sell professional services as a stand-alone private offer in AppSource or the Azure Marketplace, Microsoft said this week. A transactable professional service in the marketplace can streamline a customer’s purchasing experience by merging the service onto an Azure invoice.
Partners can also bundle services with software in a private offer where applicable, but partners cannot include the services as part of software-as-a-service (SaaS) or other software offerings, according to Microsoft. More geographies will become available for the offering in the future.
Microsoft’s professional services range from assessments to briefings, customer support, implementations, migrations, proofs of concept, and workshops.

Crowning Achievement: CRN’s 2024 Triple Crown Winners
And a round of applause goes to the 47 solution providers on this year’s CRN Triple Crown Awards list.
Each year CRN celebrates the solution providers that achieved the trifecta of making the Solution Provider 500, a ranking of the largest solution providers operating in North America by revenue; the Fast Growth 150, a ranking of the fastest-growing solution providers; and the Tech Elite 250, a roundup of solution providers that achieved the highest partner status and certifications from leading IT vendors.
Winslow Technology Group, a leading provider of IT solutions, managed services and cybersecurity services, made the Triple Crown list for the seventh year running, more than any other company on this year’s list. Advanced Computer Concepts, ANM and Sterling Computers each made the list for the sixth year.
The Triple Crown Class of 2024 also features 19 companies making the list for the first time, including Arctic, Greymatter, NetFabric Solutions and The Redesign Group.

AMD Says Instinct MI325X Bests Nvidia H200, Vows Huge Uplift With MI350

While AMD says its forthcoming Instinct MI325X GPU can outperform Nvidia’s H200 for large language model inference, the chip designer is teasing that its next-generation MI350 series will deliver magnitudes of better inference performance in the second half of next year.

AMD said its forthcoming 256-GB Instinct MI325X GPU can outperform Nvidia’s 141-GB H200 processor on AI inference workloads and vowed that the next-generation MI350 accelerator chips will improve performance by magnitudes.
When it comes to training AI models, AMD said the MI325X is on par or slightly better than the H200, the successor to Nvidia’s popular and powerful H100 GPU.
[Related: Intel Debuts AI Cloud With Gaudi 3 Chips, Inflection AI Partnership]
The Santa Clara, Calif.-based chip designer made the claims at its Advancing AI event in San Francisco, where the company discussed its plan to take on AI computing giant Nvidia with Instinct chips, EPYC CPUs, networking chips, an open software stack and data center design expertise.
“AMD continues to deliver on our roadmap, offering customers the performance they need and the choice they want, to bring AI infrastructure, at scale, to market faster,” said Forrest Norrod, head of AMD’s Data Center Solutions business group, in a statement.
The MI325X is a follow-up with greater memory capacity and bandwidth to the Instinct MI300X, which launched last December and put AMD on the map as a worthy competitor to Nvidia’s prowess in delivering powerful AI accelerator chips. It’s part of AMD’s new strategy to release Instinct chips every year instead of every two years, which was explicitly done to keep up with Nvidia’s accelerated chip release cadence.
The MI325X is set to arrive in systems from Dell Technologies, Lenovo, Supermicro, Hewlett Packard Enterprise, Gigabyte, Eviden and several other server vendors starting in the first quarter of next year, according to AMD.

Instinct MI325X Specs And Performance Metrics

Whereas the Instinct MI300X features 192GB of HBM3 high-bandwidth memory and 5.3 TB/s of memory bandwidth, the MI325X—which is based on the same CDNA 3 GPU architecture as the MI300X—comes with 256GB of HBM3E memory and can reach 6 TB/s of memory bandwidth thanks to the updated memory format.
In terms of throughput, the MI325X has the same capabilities as the MI300X: 2.6 petaflops for 8-bit floating point (FP8) performance and 1.3 petaflops for 16-bit floating point (FP16).
When comparing AI inference performance to the H200 at a chip level, AMD said the MI325X provides 40 percent faster throughput with an 8-group, 7-billion-parameter Mixtral model; 30 percent lower latency with a 7-billion-parameter Mixtral model and 20 percent lower latency with a 70-billion-parameter Llama 3.1 model.
The MI325X will fit into the eight-chip Instinct MI325X platform, which will serve as the foundation for servers launching early next year.
With eight MI325X GPUs connected over AMD’s Infinity Fabric at a bandwidth of 896 GB/s, the platform will feature 2TB of HBM3E memory, 48 TB/s of memory bandwidth, 20.8 petaflops of FP8 performance and 10.4 petaflops of FP16 performance, AMD said.
According to AMD, this means the MI325X platform has 80 percent higher memory capacity, 30 percent greater memory bandwidth and 30 percent faster FP8 and FP16 throughput than Nvidia’s H200 HGX platform, which comes with eight H200 GPUs and started shipping earlier this year as the foundation for H200-based servers.
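Those platform-level figures follow directly from the per-chip specs quoted above. As a rough sanity check, here is a short Python sketch using only AMD’s published numbers (not independent measurements), with Nvidia’s 141-GB H200 as the comparison point:

```python
# Per-chip Instinct MI325X specs as stated by AMD
MEM_GB = 256          # HBM3E capacity per GPU, in gigabytes
BW_TBS = 6.0          # memory bandwidth per GPU, in TB/s
FP8_PF = 2.6          # 8-bit floating point throughput, petaflops
FP16_PF = 1.3         # 16-bit floating point throughput, petaflops
GPUS = 8              # GPUs per MI325X platform

platform_mem_tb = GPUS * MEM_GB / 1000   # ~2 TB of HBM3E
platform_bw = GPUS * BW_TBS              # 48 TB/s aggregate bandwidth
platform_fp8 = GPUS * FP8_PF             # 20.8 petaflops of FP8
platform_fp16 = GPUS * FP16_PF           # 10.4 petaflops of FP16

# Memory comparison against Nvidia's eight-GPU H200 HGX platform
h200_mem_tb = GPUS * 141 / 1000
capacity_advantage = platform_mem_tb / h200_mem_tb - 1

print(platform_mem_tb, platform_bw, platform_fp8, platform_fp16)
print(round(capacity_advantage * 100))  # prints 82
```

The memory comparison works out to roughly 82 percent, in line with AMD’s “80 percent higher” claim, and the bandwidth and throughput aggregates match AMD’s platform figures exactly.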
Comparing inference performance to the H200 HGX platform, AMD said the MI325X platform provides 40 percent faster throughput with a 405-billion-parameter Llama 3.1 model and 20 percent lower latency with a 70-billion-parameter Llama 3.1 model.
When it comes to training a 7-billion-parameter Llama 2 model on a single GPU, AMD said the MI325X is 10 percent faster than the H200. The MI325X platform, on the other hand, is on par with the H200 HGX platform when it comes to training a 70-billion-parameter Llama 2 model across eight GPUs, the company added.

AMD Teases 35-Fold Inference Boost For MI350 Chips

AMD said its next-generation Instinct MI350 accelerator chip series is on track to launch in the second half of next year and teased that it will provide up to a 35-fold improvement in inference performance compared to the MI300X.
The company said this is a projection based on engineering estimates for an eight-GPU MI350 platform running a 1.8-trillion-parameter Mixture of Experts model.
Based on AMD’s next-generation CDNA 4 architecture and using a 3-nanometer manufacturing process, the MI350 series will include the MI355X GPU, which will feature 288GB of HBM3e memory and 8 TB/s of memory bandwidth.
With the MI350 series supporting new 4-bit and 6-bit floating point formats (FP4, FP6), the MI355X is capable of achieving 9.2 petaflops, according to AMD. For FP8 and FP16, the MI355X is expected to reach 4.6 petaflops and 2.3 petaflops, respectively.
This means the next-generation Instinct chip is expected to provide 77 percent faster performance with the FP8 and FP16 formats than the MI325X or MI300X.
Featuring eight MI355X GPUs, the Instinct MI355X platform is expected to feature 2.3TB of HBM3e memory, 64 TB/s of memory bandwidth, 18.5 petaflops of FP16 performance, 37 petaflops of FP8 performance as well as 74 petaflops of FP6 and FP4 performance.
With its 74 petaflops of FP6 and FP4 performance, the MI355X platform is expected to be 7.4 times faster than the FP16 capabilities of the MI300X platform, according to AMD.
The MI355X platform’s 50 percent greater memory capacity means it can support up to 4.2-trillion-parameter models on a single system, six times greater than what was possible with the MI300X platform.
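The generational percentages in this section can likewise be checked against AMD’s stated per-chip throughput. A minimal Python sketch (these are AMD’s projections, not measured results; note that the 7.4x claim comes out closer to 7.1x from the rounded public numbers, suggesting AMD used more precise internal figures):

```python
# AMD's stated per-chip throughput, in petaflops
mi300x_fp16, mi300x_fp8 = 1.3, 2.6     # also applies to the MI325X
mi355x_fp16, mi355x_fp8 = 2.3, 4.6
mi355x_fp4 = 9.2                        # new FP4/FP6 formats
gpus = 8

# Generational uplift at the same precision: ~77 percent in both cases
fp16_gain = mi355x_fp16 / mi300x_fp16 - 1
fp8_gain = mi355x_fp8 / mi300x_fp8 - 1

# Eight-GPU platform aggregates
fp4_platform = gpus * mi355x_fp4            # 73.6, quoted as "74 petaflops"
fp16_mi300x_platform = gpus * mi300x_fp16   # 10.4 petaflops

# FP4 on the new platform vs. FP16 on the old one
speedup = fp4_platform / fp16_mi300x_platform  # ~7.1x from these numbers

# Memory: 8 x 288 GB vs. 8 x 192 GB is exactly 50 percent more capacity
mem_gain = 288 / 192 - 1

print(round(fp16_gain, 2), round(fp8_gain, 2), round(speedup, 1), mem_gain)
```

The 77 percent uplift and the 50 percent memory-capacity gain reproduce exactly from the per-chip specs, which is a useful cross-check on the platform claims.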
After AMD debuts the MI355X in the second half of next year, the company plans to introduce the Instinct MI400 series in 2026 with a next-generation CDNA architecture.

New Features In AMD’s Open Software Stack

AMD said it is introducing new features and capabilities in its AMD ROCm open software stack, which includes new algorithms, new libraries and expanding platform support.
The company said ROCm now supports the “most widely used AI frameworks, libraries and models including PyTorch, Triton, Hugging Face and many others.”
“This work translates to out-of-the-box performance and support with AMD Instinct accelerators on popular generative AI models like Stable Diffusion 3, Meta Llama 3, 3.1 and 3.2 and more than one million models at Hugging Face,” AMD said.
With the new 6.2 version of ROCm, AMD will support the new FP8 format, Flash Attention 3 and kernel fusion, among other things. This will translate into a 2.4-fold improvement in inference performance and 80 percent better training performance for a variety of large language models, according to the company.

Ryzen AI Pro 300 Series is the ‘best AI PC platform’ for businesses

The chip designer says Ryzen AI Pro 300 processors for AI-accelerated laptops combine leadership performance and efficiency with new enterprise-level security features such as cloud bare metal recovery as well as remote management and deployment capabilities.

AMD is calling its newly launched Ryzen AI Pro 300 processors “the best AI PC platform” for businesses and saying they’ll bring new levels of security and manageability to Microsoft’s Copilot+ PCs that are due to hit the market next year.
Unveiled at the company’s Advancing AI event in San Francisco on Thursday, the Ryzen AI Pro 300 processors deliver leadership performance and efficiency alongside new enterprise-level security features such as cloud bare-metal recovery as well as remote management and deployment capabilities, according to the Santa Clara, Calif.-based chip designer.
[Related: The Biggest HP Imagine 2024 News: From EliteBook X AI PC To On-Demand GPUs]
According to AMD, PC vendors are expected to use Ryzen AI Pro processors in more than 100 computer designs for businesses by 2025.
The Ryzen AI Pro 300 series is part of AMD’s third generation of processors for AI-accelerated laptops, and the chips feature Zen 5 CPUs with up to 12 cores and a 5.1-GHz maximum boost frequency, RDNA 3.5 GPUs with up to 16 compute units, and an XDNA 2 neural processing unit (NPU) capable of 50 to 55 trillion operations per second (TOPS).
John Anguiano, an AMD product marketer, said these specifications represent a “tremendous generational uplift” over the previous-generation Ryzen 8040 series, which debuted at the beginning of the year and fell short of the 40-TOPS NPU requirement Microsoft set for Copilot+ PCs, a category that launched with Qualcomm chips in June.
Microsoft plans to make Copilot+ PC features on supported AMD platforms available to Windows Insider Program members in November.
“Overall, this makes the Ryzen AI Pro 300 series the best in enabling next-generation AI experiences for the enterprise,” he said.
According to Anguiano, the new processors also enable multiday battery life in laptops.
“This means we are ready for the Windows 11 environment, and certainly ready for Copilot+ in a more robust, more performant, and battery-ready way,” he said.
The Ryzen AI Pro 300 series will initially include three models, ranging from the eight-core, 5GHz Ryzen AI 7 Pro 300 to the 12-core, 5.1GHz Ryzen AI 9 HX Pro 375.
With an NPU delivering 50 to 55 TOPS, the Ryzen AI Pro 300 series provides faster NPU performance than the 48 TOPS of Intel’s recently launched Core Ultra 200V processors and the roughly 11 TOPS of the first-generation Core Ultra chips that debuted last year. It’s also faster than the 45 TOPS of Qualcomm’s Snapdragon X chips for Copilot+ PCs.
In a comparison with the latest Intel AI PC chips with vPro capabilities, AMD said the Ryzen AI 7 Pro 360 is 30 percent faster than the Core Ultra 7 165U while the Ryzen AI 9 HX Pro 375 is 40 percent faster than the Core Ultra 7 165H on the Cinebench R24 multi-thread test.
With the Procyon Office productivity benchmark, which tests Microsoft Office application performance, AMD said the Ryzen AI 7 Pro 360 is 9 percent faster than the Core Ultra 7 165U while the Ryzen AI 9 HX Pro 375 is 14 percent faster than the Core Ultra 7 165H.
Rival Intel has previously said it does not expect to release a vPro version of its Core Ultra 200V chips until early next year.
Like previous Ryzen Pro processors, the Ryzen AI Pro 300 series features AMD’s Pro Technologies for security, manageability, and reliability.
What’s new in AMD’s Pro Technologies for the Ryzen AI Pro 300 series is on the security side, where there are four new features: a second-generation AMD Secure Processor, cloud bare-metal recovery, supply chain security and Watch Dog Timer.
Cloud bare-metal recovery allows IT administrators to communicate with a PC before its operating system loads, via the cloud, to recover the system without shipping the device. Supply chain security provides authentication for AMD’s systems-on-chips in customer platforms and enables “traceability” in the supply chain.
Watch Dog Timer, meanwhile, “enhances resiliency support through hang detection and recovery [of system-on-chip] procedures,” according to AMD.