‘We’re in a race to increase supply to meet incredible’ Blackwell demand

Nvidia finance chief Colette Kress said in her third-quarter remarks before CEO Jensen Huang addressed the issues with Blackwell, “There are some supply constraints in both the Hopper and Blackwell systems, and demand for Blackwell will remain the same for several quarters into fiscal 2026.” Expected to exceed supply by 2020.

Nvidia CFO Colette Cress said the company is “racing to deliver at scale to meet the incredible demand” for its recently launched Blackwell GPUs and related systems, which are designed to improve the performance and efficiency of Generator AI. It has been said to take a big leap.
Cress made the comments during the AI ​​computing giant’s Wednesday earnings call, where the company revealed that it third quarter revenue Nearly doubled to $35.1 billion, mainly due to continued high demand for its data center chips and systems.
[Related: Nvidia Reveals 4-GPU GB200 NVL4 Superchip, Releases H200 NVL Module]
In prepared remarks at the latest earnings, Kress said that Nvidia is about to start shipping blackwell productsNow in full production as of January, it plans to increase those shipments in its 2026 fiscal year, in line with the following months.
“Both the Hopper and Blackwell systems have some supply constraints, and demand for Blackwell is expected to exceed supply for several quarters into fiscal 2026,” they wrote.
On the call, Kress said Nvidia is “on track to exceed” its previous Blackwell revenue estimate of several billion dollars for the fourth quarter, which will end in late January, as its “visibility into supply continues to increase.” ”
“Blackwell is a customizable AI infrastructure with seven different types of Nvidia-made chips, multiple networking options, and air- and liquid-cooled data centers,” she said. “Our current focus is on meeting strong demand, increasing system availability and providing our customers with the optimal mix of configurations.”

Jensen Huang addresses Blackwell issues, road map execution

During a question-and-answer session of the call, a financial analyst asked Nvidia CEO Jensen Huang to address Sunday report By industry publication The Information, which details some customer concerns about overheating of Blackwell GPUs in the most powerful configuration of its Grace Blackwell GB200 NVL72 rack-scale server platform.
The GB200 NVL72 is expected to serve as the basis for upcoming major Nvidia AI offerings from major OEMs and cloud computing partners, including Dell Technologies, Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud.
In response to a question about The Information’s report on overheating of the Blackwell GPUs in the GB200 NVL72 system, Huang said that while Blackwell’s production is “all in” with the company exceeding previous revenue projections, he added that engineering What Nvidia does with OEMs and cloud computing partners is “rather complicated.”
“The reason is that although we build the full stack and the full infrastructure, we separate all the AI ​​supercomputers and we integrate them into all the custom data centers and architectures around the world,” he said on the earnings call. Said.
“That integration process is something we have done for generations. We’re pretty good at it, but there’s still a lot of engineering to be done at this point,” Huang said.
Huang noted that Dell Technologies announced Sunday that it has begun shipping its GB200 NVL72 server racks to customers, including emerging GPU cloud service provider CoreWeave. He also mentioned the Blackwell system that is being built by Oracle Cloud Infrastructure, Microsoft Azure and Google Cloud.
“But as you can see from all the systems that are being put in place, Blackwell is in very good shape,” he said. “And as we mentioned earlier, the supply and turnaround we plan to make this quarter exceeds our previous estimates.”
Addressing a question about Nvidia’s ability to execute on its data center road map, which moved to an annual release cadence for GPUs and other chips last yearHuang remained steadfast in his commitment to the accelerated production plan.
“We are on an annual roadmap, and we are looking forward to continuing to execute on that annual roadmap. And by doing so we increase the performance of our platform. But it’s also really important to understand that when we’re able to increase performance and do so at X factors at a time, we’re reducing the cost of training, we’re reducing the cost of inference, and we’re reducing “We are reducing the cost of AI so that it can be more accessible,” he said.

Partner’s opinion on Nvidia growth, investors’ reaction

After Nvidia released its third-quarter earnings, investors reacted, sending the company’s share price down more than 1 percent in after-hours trading.
While the company beat Wall Street expectations on revenue by nearly $2 billion and beat the average analyst estimate for earnings per share by 6 cents, its fourth-quarter revenue estimate came in just slightly higher than Wall Street expected. Was.
Andy Lin, CTO Top Nvidia Partners Mark III Systems in Houston, Texas, told CRN that while demand for Nvidia’s data center GPUs and related systems is “obviously incredibly strong,” it has set a “such a high level” for itself, with triple-digit growth in multiple quarters. have set.
“These numbers are still surprising, especially on a year-over-year comparison. But this is clearly a smaller increase than before,” he said.
However, Lin said, some customers are holding off on making any purchases right now because Nvidia is switching from Hopper-based systems to Blackwell-based systems.
“There are certainly a number of organizations that we look at that certainly haven’t spent [money on new infrastructure and are waiting] On Blackwell. So the question is, how many of them, at what scale, and what will that look like? And I think that might be something that the market is probably underestimating a little bit,” he said.

Apache’s VM Essentials brings partners back into the ‘virtualization game’

“I think the addition of VM Essentials will be a catalyst for customer investment and will drive business growth in the channel,” says Hang Tan, COO of Apache Hybrid Cloud. Hewlett Packard Enterprise’s new VM Essentials product engages partners in the virtualization software sales arena with a new offering that packs a big price performance punch, said Hang Tan, HP Hybrid Cloud Chief Operating Office.

“This really brings partners back into the virtualization game,” said Tan (pictured above) in an interview with CRN. “A lot of customers are putting off infrastructure IT decisions and delaying purchasing decisions and partners are feeling this. I think the addition of VM Essentials will be a catalyst for customer investment and will drive business growth for the channel.”
hpe unveils vm essentials hpe Discover the Barcelona conference with backup and recovery services from Cohesity and Commvault.
[Related: Partners Cheer New HPE Virtualization Capability Being Showcased At HPE Discover]
The per-socket pricing on VM Essentials is a sharp contrast to the much-maligned per-core pricing model implemented after Broadcom acquired VMware in a $61 billion blockbuster acquisition.
Broadcom’s VMware per-core pricing model combined with a reduced virtualization product portfolio has led to sharp price increases, which has been widely criticized by partners who have adopted VMware alternatives such as Nutanix and Scale Computing.
Now VMware is entering the virtualization market for the first time with an open source-based KVM hypervisor, which promises up to five times the total cost of ownership benefits for enterprises moving to VM Essentials and Apache GreenLake Cloud.
“Our per-socket pricing makes it very attractive for customers to re-platform five or six-year-old legacy infrastructure onto new modern sustainable, more secure infrastructure,” Tan said. “On top of that, partners can layer services to create a complete private cloud with self-service for the end customer or a managed offering for you. This really brings the partners back into the game in 2025.
Virtually every customer brings up virtualization as a ripe opportunity for Apache, Tan said. “Every customer we’ve talked to on virtualization (the topic) has come up,” he said. “I can’t tell you the size (of the market) right now but I’m very bullish. We’re hearing it everywhere.”
Even though partners see it as a head-to-head alternative to Broadcom’s VMware Cloud Foundation model, Tan stressed that VM Essentials as a standalone offering “complements” the “mature full stack” VMware Cloud Foundation platform. Is.
“They’re a very strong partner of ours,” he said of VMware. “We are very committed to that relationship. But there will also be other unique customer situations where they don’t need full-stack capabilities. Those are the workloads that can run on VM Essentials. I think it’s going to be very complementary, and it’s going to drive a lot of business in terms of both infrastructure and services for the partners.
Apache partners have been eagerly awaiting the release of the new virtualization software since it was unveiled by Apache at the Apache Discover conference in June. At the time, partners welcomed the announcement with hearty applause and praised VMware for providing a reliable alternative.
Executive Vice President and CTO of HPE Fidelma Russo tells CRN at Apache Discover It said in June that the new hypervisor was a response to partner and customer demand for a virtualization partner that was “consistent,” “reliable” and that they could “relate to” moving forward. “We never would have done this if the customers weren’t motivating us,” Russo said.
Seibold, GreenLake partner and vice president of service provider sales Ulrich (Uli), said that Apache is making a splash in the market amid all the “noise” and “frustration” that partners are experiencing. “Each partner tried to motivate us to help us with an alternative,” Seibold said. “Now we are the alternative. This is complementary. This is not a replacement. But now we are the alternative.”
HBO’s leading position as a hybrid cloud leader with hardware and software offerings offers partners the opportunity to add their “intellectual property” on top of VM Essentials and the Apache GreenLake platform, Siebold said. “This makes us so relevant to partners – not just for today – but even more so in the future,” he said. “This is a huge market for us… We are extremely excited at the moment and the response from partners has been phenomenal.”
In its press conference on the new offering, VMware said there were more than 100 beta applications that received strong praise from partners. Among the quotes provided by Apache from beta testers: “Love your per-socket pricing… it’s really coming together in a very integrated and concise way”; ‘The solution hit all the basics: VM creation, snaps, imaging, live migration, load balancing,’; ‘I am impressed by the completeness of the solution. I can start using it straight away in production.
Tan, for his part, said VM Essentials opens the door for partners to “optimize and offset cost overruns” associated with their customers’ virtualization IT estates. In addition to attractive socket pricing, HBO’s CloudPhysics cloud financial assessment tool provides detailed recommendations for cost efficiency that take into account hardware utilization rates and storage consolidation. “This is a great part of what partners can offer their customers,” Tan said.
The call to action, Tan said, is for partners to move forward to “modernize” their customers with “future-proof” full hybrid cloud capability.
Tan said, “HPE GreenLake Morpheus provides the opportunity to simplify hybrid cloud estates, such as multicloud management and OpsRamp monitoring and automation. “It’s really about optimization, modernization and simplification,” he said.
Tan called the new VM Essentials standalone offering — which will be formally available in December — a much-needed “modular” software offering sought by partners that can run on either Apache or third-party hardware.
“When we announced it (in June) at (HPE) Discover, everyone was very excited, but they pulled us aside and told us it would be really cool if we sold it alone,” Tan said. ” “A lot of customers have recently invested in their IT infrastructure and they want to be able to take advantage of that and protect that investment.”
Seibold said the message to midmarket-focused partners is not to try to replace pre-existing virtualization software licenses, but to focus on new workloads.
“If customers want to scale and expand, rather than purchasing very expensive additional licenses we offer them managed virtual machines in a holistic environment in a more efficient, as-needed way for new workloads or some new environment.”
Apache did not release specific pricing on the new software. That said, partners told CRN they expect to see an attractive bundle with Apache private cloud solutions. In fact, some said, Apache is considering offering a free virtualization subscription and support out of the gate for the Private Cloud Business Edition.
private cloud business edition with vm essentials software will be available in spring 2025, hpe said
Tan said he sees the combined virtualization capability with the Apache GreenLake hybrid cloud as an opportunity for partners to help customers “rethink” their IT strategy for the future. In fact, he said, “forward-thinking” partners see this as an opportunity to “future proof” their customers.
Morpheus is a critical element of a future-proof IT estate, Tan said, providing a self-service IT capability across multi-cloud environments encompassing the full multi-cloud spectrum, including Amazon Web Services, Azure, Red Hat and Nutanix. Is.
“Morpheus allows IT departments to become service providers for businesses and developers,” he said. “It’s a platform to sell VMs or Kubernetes clusters and keep application catalogs and blueprints. That allows self-service IT departments to maintain governance, cost control, and monitoring.
The big splash channel impact of Morpheus — which provides orchestration and lifecycle management for VMs and containers — is the ability for partners to become service providers for their customers, Tan said. Add in OpsRamp monitoring and observability capabilities and partners will “essentially” have a complete MSP toolkit. “We are essentially building an MSP kit and you can build an entire service business around it,” he said. “I would argue that we are the most partner-forward visionary company in the market that is really taking our partners’ relationships with their customers to the next level.”
Tan praised the Apache team for moving quickly to deliver a virtualization offering that meets the needs of both customers and partners. “We care deeply about our customers and partners and we respond to demand,” he said. “The company really got proactive around it.”

Deloitte is planning Bonanza, a GenAI agent app based on Apache Private Cloud AI

“The time to result, the time to value on this is significantly reduced (with Apache Private Cloud AI),” says Abdi Goodarzi, principal at GenAI, Deloitte Consulting LLP. “We have our full commitment with the (Deloitte-Nvidia-HPE) partnership to bring this to the enterprise as quickly as possible.”

Deloitte – a $67.2 billion global technology strategy, tax and systems integration giant – is building Bonanza, a GenAI agent application based on Hewlett Packard Enterprise’s private cloud AI service.
The first of a wave of Deloitte GenAI agent applications based on Apache Private Cloud AI was unveiled at Apache Discover Barcelona as part of an expanded collaboration agreement between Deloitte, Apache and Nvidia, which will be powered by Nvidia AI Computing by the Apache Private Cloud AI service. Was focused on.
“We’re going to create (AI) agents for almost every part of the business, from the front office to the back office to the center of the business,” said Abdi Goodarzi, principal and leader of GenAI products and innovations at Deloitte Consulting LLP. In an interview with CRN. “This is basically the wave of the future. This is the era of AI enablement. We are investing huge amounts in this area. Our aim is to help our customers build more efficient businesses.”
The first of the Apache Private Cloud AI-based GenAI agent offerings is Deloitte C-Suite AI for CFOs. That GenAI Agent offering provides GenAI financial analysis, modeling and competitive market analysis to chief financial officers.
Another new offering from Deloitte, Atlas AI, is a GenAI solution designed to accelerate life science discoveries focused on molecular modeling and drug discovery.
Deloitte LLP has a 2025 product road map based on the Apache Private Cloud AI service that includes a “significant number” of new GenAI agents in the finance sector, Goodarzi said.
“In 2025, we are going to focus on supply chain, sales, services, marketing and tax (AI agent applications),” Goodarzi said. “Those are areas that many customers have told us they would welcome these advanced capabilities and enhancements to.”
Deloitte is leading the way with a broad spectrum of GenAI agent apps, from simple agents to assist clients to advanced agents that process large amounts of data.
The Deloitte GenAI Agent app covers dozens of categories ranging from many aspects of aggressive financing, such as invoice processing or sourcing and procurement, to planning, product lifecycle management, warehouse management and supply chain with logistics, Goodarzi said.
Virtually every industry has business processes like claims processing for insurance companies that can be accelerated with the use of GenAI, Goodarzi said. He said the net effect of such insurance claims AI agents will be increased customer satisfaction and increased sales.
Goodarzi called the Apache Private Cloud AI Platform a “significant game changer” that will bring dramatic improvements to all types of business processes. “This is a significant commitment for Deloitte to partner with Nvidia and Apache,” he said. “We have an AI factory with thousands of developers and thousands of AI practitioners, data scientists who understand how you can bring these ideas and innovations to life. “Our goal is to enable this as quickly as possible.”
The Deloitte GenAI agent design and architecture is based on very “short cycle” development times, Goodarzi said, with the aim of rapidly delivering AI benefits to customers.
“The time to results, the time to value on this (with Apache Private Cloud AI) is significantly reduced,” he said. “We have our full commitment with the (Deloitte-Nvidia-HPE) partnership to bring this to the enterprise as quickly as possible.”
The Apache Private Cloud AI service with Apache GreenLake offers at least a “five to ten times” cost advantage over the standard infrastructure offering, Goodarzi said. “How do you do this if you don’t have the power of PC AI?” he asked. “You will need a lot more computing power. You need a lot of data. You need a lot of networking.”
In fact, Goodarzi said that Apache Private Cloud AI allows Deloitte clients to achieve faster times to results. “On day one you can actually get results from the (HPE) PC (private cloud) AI once it’s connected to your system,” he said. “Obviously there is a shorter deployment cycle to connect PC AI to your data and integrate it with your security… Once connected it starts consuming the data and understanding the data and generating results immediately.” Starts doing it. No other system can do this!”
Cherry Williams, senior vice president of GreenLake Flex Solutions and Common Services, said that Deloitte C Suite AI was enabled in less than three days.
Additionally, Williams said Deloitte has added two additional use cases on PC AI. “The ability for Deloitte to enable these use cases on PC AI at such a fast pace means they are then able to offer similar or even faster deployment of AI use cases to their clients,” he said.
Williams said, “One of the challenges customers face is really enabling their GenAI use cases and they’re turning to partners like Deloitte to help them figure out how to enable those use cases.” be enabled.”
Williams said the partnership with Deloitte will ultimately allow Apache to reach and “empower” much more customers with GenAI-based services.
Goodarzi predicted that 2025 will be a key year for customers to leverage their enterprise data to create GenAI-based agents. “Customers need to think about enterprise data strategies and platforms and how they can prepare themselves for the arrival of silicon and infrastructure that could give them a significant boost,” he said.
Goodarzi emphasized that talented employees will be at the center of the new wave of GenAI agent application offerings. “Humans are at the center of it and will always be at the center of it,” he said. “So get your data right. Get your talent right. Get your risk and regulatory expectations right. (HPE) Invest in PC AI and other infrastructure. By then you’re able to really activate all these agents and other innovations and harness them to accelerate your business and really become disruptive in your industry.

10 hottest networking startups of 2024

The hottest networking startups of 2024 focused their energies on the edge, private 5G, multi-cloud networking, and AI.

Newcomers to networking are carving out a niche for themselves as several trends are gaining momentum and enterprises look to refresh their infrastructure.

For starters, edge, multi-cloud and private networking capabilities are increasingly becoming necessities for many enterprises grappling with hybrid work or changing their physical office blueprint to better adapt to how their employees are working and doing business today. We are doing it to better match the reality. Also, AI has become not just a buzzword but a new reality. As AI use cases become more prevalent, especially in operational environments, being able to accommodate new computing and storage requirements, or adding AI in new locations on the network and at the edge of the network The networking infrastructure needs to change frequently.
Edge and private networking startups have exploded onto the scene, as well as cloud networking upstarts that are providing a new way for enterprises to achieve secure networking through consumption-as-a-service and consumption-based models so that businesses can Don’t have to break. Banks are modernizing their networks. To that end, many of these market newcomers are raising new rounds of funding to help them expand their reach and continue their innovative work.
From those specializing in multi-cloud-based offerings and networking-as-a-service to edge networking and AI, here are the 10 hottest networking startups of 2024.

Alkira
Upstart Alkira specializes in agentless, multi-cloud networking. The San Jose, California-based company emerged from stealth mode in 2020 with its consumption-based Cloud Services Exchange (CSX), an integrated, on-demand offering that lets cloud architects and network engineers build and deploy multi-cloud networks Is. minutes. Since then, the company has unveiled a collaboration with the Microsoft for Startups program, as well as established a deeper relationship with Amazon Web Services, whose marketplace also includes Alkira CSX.
Network Infrastructure-as-a-Service Specialist in October launched its cloud-based Zero Trust Network Access (ZTNA) serviceNetwork Infrastructure-as-a-Service, an offering that will further simplify security and networking for enterprises, according to the startup.
Alkira announced the closing of a $100 million Series C funding round in May, bringing the company’s total funding raised to date to $176 million.

Avis Network
Founded in 2019, Avis Networks has been hard at work innovating on its brand of open networking software for cloud-scale infrastructure. According to San Jose, California-based Avis, the company specializes in building open, cloud and AI-first networks that prioritize choice, control and cost savings.
The company introduced its One Data Lake earlier this year, launched CoPilot, a generative AI conversational network, and expanded its packet broker offering for applications and 5G General Packet Radio Service (GPRS) Tunneling Protocol (GTP) use cases, among other advancements. Upgraded.
Avis, which counts several major investors including Cisco Investments as backers, also has a partner program for reseller and distributor partners.

cape
Cape, founded two years ago, bills itself as a privacy-first mobile carrier that focuses on connecting people without compromising security and privacy. The carrier offers nationwide 5G and 4G coverage while preventing hackers and spam.
Cape competes with existing US-based carriers including AT&T and Verizon. The company partners with US Cellular on the physical infrastructure, runs its own voice service on top, and it runs its own mobile core.
In April, Cape raised $61 million in funding from A*, Andreessen Horowitz, XYZ Ventures, X/Ante, Costanoa Ventures, Point72 Ventures, Forward Deployed VC and Karman Ventures.

Selona
Wireless specialist Celona burst onto the networking scene four years ago with a platform that lets enterprises build and deploy 5G/4G LTE private networks, filling a huge gap in the wireless connectivity market at the time , interest in personal networking in particular had increased dramatically. venture over the years.
The Cupertino, California-based company enters the market through a strategic partnership with Apache Aruba for the resale of Celona’s cellular products. The channel-friendly company also works with partners through Solution Provider Partner ProgramIn October, Celona announced AirLock, a new suite of security capabilities for private 5G wireless network security.
Celona’s most recent – ​​and oversubscribed – funding round was a $60 million Series C round in 2022.

alicity
Elisity is an expert in network segmentation that leads the market with its IdentityGraph technology.
According to the company, the San Jose, California-based startup’s identity-based microsegmentation technology helps enforce granular controls over users and devices, allowing organizations to employ network segmentation to protect against threats and limit the blast radius. Gives.
Alicity was founded in 2018. Company CEO James Winebrenner joined the company in 2020 after about a year with multicloud networking player Aviatrix Systems. Elicity also has a partner program for solution providers and system integrators.
In April, Alicity raised a $37 million Series B round of funding from Insight Partners. The company said it is using the funding for AI capabilities to predict and prevent cyber threats.

Highway 9 Network
Santa Clara, California-based upstart Highway 9 Networks offers a cloud-native platform that the company said is built for enterprise mobile users, applications and AI-powered devices.
According to Highway 9, the startup’s Edge product provides distributed network functions with integration with enterprise IT and major telcos.
Highway 9 also works with partners through its Mobile Cloud Alliance program to bring its connectivity and private 5G to end customers.
Highway 9 launched in stealth mode in February this year and revealed that it had raised $25 million in funding from Mayfield, General Catalyst, and Detroit Ventures.

Natalie
NetAlly is on a journey. The company began as a business unit of Fluke Networks. It was then part of NetScout Systems and became a standalone player until 2019. Today, NetAlley comes to market with its portfolio of switching, wireless, IP surveillance, storage, and security products.
NetAlly has approximately 50,000 global customers in 70 countries. The company does 100 percent of its business through channel partners, which includes approximately 300 solution provider partners globally, about 30 percent of which are MSP partners, the company told CRN in October. That same month NetAlly brought on Jeff McCulloughA 25-year channel veteran, as the new vice president of sales for North America.

Indigo
Nile, the next-generation networking services platform provider backed by former Cisco CEO John Chambers, came out of stealth mode in 2022 with its “reimagined” wired and wireless service, delivered entirely as a service. The company’s Enterprise Network as a Service (NaaS) offering aims to bring another option to the market for businesses that is different from what many of the market giants like Cisco are offering today.
The San Jose, California-based startup in March launches its full AI services platform With AI applications aimed at automating network design, configuration, and management. Nile AI Architecture includes Nile Services Cloud, which includes AI-based network design; Nile Service Block, which automates network deployment, including Nile Copilot and Nile Autopilot applications for access point configuration and AI-based network monitoring and operations.
In 2023, Nile raised $175 million in a Series C funding round, bringing its total funding to $300 million.

Proximo
Multi-cloud networking disruptor Prosimo emerged from stealth in 2021 with its Application eXperience Infrastructure (AXI) platform, which is modernizing and simplifying application delivery and experience in multi-cloud environments. According to the company, the Proximo platform can co-exist with existing vendors in a customer’s environment or be used to replace certain tools and features, such as zero trust or cloud peering.
The Santa Clara, California-based startup in June Security expert joins Palo Alto Networks To further secure application access in multi-cloud environments. Prosimo’s head of marketing told CRN that the partnership was important as the company looks to partner with more Fortune 500 companies.
The company raised $30 million in Series B funding in its latest funding round in 2022.

recognize
Upstart Recogni, which focuses on AI-based computing, builds compute systems that can provide multimodel GenAI inference for data center environments. The company said it has noticed a problem that many generative AI systems today are inefficient and consume too much power. San Jose, Calif.-based Recogni’s technology, however, is helping meet the larger compute requirements of AI workloads.
Recogni’s latest $102 million Series C funding The round was co-led by Celesta Capital and Greatpoint Ventures in February. Juniper Networks revealed in November that it had invested in the startup as part of a Series C funding round.

MongoDB expands Azure integration, boosts real-time analytics and GenAI

The database and development platform provider is announcing several initiatives at Microsoft Ignite this week that make it easier for customers and partners to work with MongoDB on the Azure cloud.

MongoDB is expanding the scope of integration between its cloud database development platform and Microsoft Azure, which the company says will make it easier for partners and customers to build real-time data analytics links and develop generative AI applications.
Today in this week’s series of announcements Microsoft Ignite ConferenceMongoDB is integrating MongoDB Atlas cloud databases with Microsoft’s Azure OpenAI services and launching its MongoDB enterprise advanced database management tools on the Azure Marketplace.
MongoDB said the new integrations will provide partners and customers with greater flexibility in data development on Azure – particularly to help meet growing data demand. aye and generic AI applications.
[Related: MongoDB CEO Ittycheria: AI Has Reached ‘A Crucible Moment’ In Its Development]
“I think the pace is phenomenal, things are changing daily,” Alan Chhabra, executive vice president of worldwide partners for MongoDB, said in an interview. crn About the rapid growth of AI and GenAI development. He said that experimentation with GenAI, especially within large enterprises, “is through the roof.”
Despite competing with Microsoft and its Azure Cosmos database, MongoDB is constantly expanding its alliance with Microsoft – as well as its own partnerships Amazon Web Services And google cloud – In recent years.
Last year MongoDB extended its multi-year strategic partnership with Microsoft, committing to a number of initiatives including closer collaboration between the two companies’ sales teams and making it easier to migrate database workloads to MongoDB Atlas on Azure. After this, such steps were taken in 2022 which allowed developers to Work with MongoDB Atlas Through the Azure Marketplace and the Azure Portal.
“Microsoft has become our fastest growing partnership,” Chhabra That said, seeing how MongoDB and Microsoft sales reps collaborate to sell MongoDB for Azure, specifically for AI and GenAI development.
At the Ignite event on Tuesday, MongoDB announced that customers building applications powered by recovery-augmented generation (RAG) can now select MongoDB Atlas as a vector store in Microsoft Azure AI Foundry, bringing MongoDB Atlas’s vector capabilities to Microsoft. Can combine with Azure’s generative AI tools and services. and Azure Open AI Service.
The company said this makes it easier for customers to enhance large language models (LLMs) with proprietary data and create unique chatbots, co-pilots, internal applications or customer-facing portals that leverage up-to-date enterprise data. And are based on context.
Chhabra said the new capabilities are designed to help customers develop and deploy GenAI applications. “It is not easy. There is a lot of confusion. It also has a lot of uses, because everyone knows they need to use it. [but] They’re not sure how.
“This integration will make it easy and seamless for customers to deploy RAG applications by leveraging their proprietary data in conjunction with their LLM,” Chhabra said.
in may MongoDB launched MongoDB AI Applications Program (MAAP) which provides an entire technology stack, services, and other resources to help businesses develop and deploy large-scale applications with advanced generative AI capabilities.
Chhabra said MongoDB systems integration and consulting partners will benefit from the new integration “because we’re making it easier for them to deploy General AI pilots and help take them into production for customers.”
Chhabra said that while larger enterprises are doing a lot of AI development and experimentation in-house, SMBs are looking for more fully packaged AI and GenAI solutions.
“I believe there is a big play for ISV application [developers] Those who are building purpose-built GenAI applications in the cloud on Azure, leveraging the MongoDB stack, leveraging our MAAP program,” Chhabra said. “So instead of customers having to build, they can buy GenAI solutions. When large companies like Microsoft work with cutting-edge, growing companies like MongoDB, we make it easier for customers and partners to deploy GenAI. [and] The entire ecosystem benefits.”
In another announcement from Ignite, MongoDB said that users who want to get the most insight from operational data can now do so in real-time with open mirroring in Microsoft Fabric for MongoDB Atlas. According to MongoDB, this connection keeps data in sync between MongoDB Atlas and OneLake in Microsoft Fabric, helping to generate real-time analytics, AI-based predictions, and business intelligence reports.
And the announcement of the launch of MongoDB Enterprise Advanced on the Azure Marketplace for Azure Arc-enabled Kubernetes applications gives customers greater flexibility to build and operate applications in on-premises, hybrid, multi-cloud, and edge Kubernetes environments.
Eliassen Group, a Reading, Mass.-based strategic consulting company that provides business, clinical and IT services, will use the new Microsoft integration to foster innovation and provide greater flexibility to its clients, MongoDB said.
“We have seen the incredible impact MongoDB Atlas has had on our customers’ businesses, and we are equally impressed by the capabilities of Microsoft Azure AI Foundry. “Now that these powerful platforms have been integrated, we are excited to combine the best of both worlds to create AI solutions that our customers will love as much as we do,” said David E., Vice President of Emerging Technology at Eliassen Group. said Vice President Colby Capps. In a statement.
The new extensions to the Microsoft alliance come a little more than a month after MongoDB introduced MongoDB 8.0, a significant update to the company’s core database that offers improved scalability, optimized performance, and enhanced enterprise-grade security .

Hitachi Vantara Expands Hitachi IQ AI Infrastructure with Nvidia HGX Platform

By adding Nvidia’s HGX H100 and H200 GPUs to the Hitachi iQ AI-ready infrastructure, Hitachi Vantara aims to bring it to a wider range of enterprises and ultimately to mid-range customers looking for a complete technology stack for AI workloads. Wants to take.

Data storage and infrastructure technology developer Hitachi Vantara on Tuesday unveiled a new version of its Hitachi IQ AI solution stack that includes Nvidia’s HDX GPUs.
Hitachi iQ, unveiled earlier this year, brings together Hitachi Vantara’s AI-ready infrastructure technologies with Nvidia’s GPU technologies, said Tanya Loughlin, director of AI and unstructured solutions product marketing at Hitachi Vantara.
Laughlin told CRN that the Hitachi IQ brand includes the AI-ready infrastructure stack and services that the company brings to market. The first was in July with the launch of the Hitachi iQ with Nvidia DGX GPUs and Nvidia BasePod certification, he said.
On Tuesday, Hitachi Vantara is officially launching the Hitachi iQ with Nvidia’s HGX GPUs, Laughlin said.
Currently, with the DGX-version, Hitachi iQ AI systems are integrated in the field. However, HGX versions are integrated by Hitachi Vantara before shipping to channel partners and customers for final configuration, he said.
“What’s exciting about it is that we’re reselling it,” he said. “Nvidia’s DGX can only be sold by DGX-certified partners. With HGX, we are packaging and reselling the entire solution. Our infrastructure, our storage, Nvidia components, the HDX compute layer, as well as Nvidia’s AI enterprise, which is basically a framework for building AI applications. We will be able to package it, and it will go directly to customers from our distribution center.
The DGX was one of the original high-end GPU servers, built by Nvidia with eight H100 or H200 GPUs, and was originally sold by Nvidia to the company’s ODM partners, said Gary Hemminger, senior director of AI solutions product management at Hitachi Vantara. Was sent to.
“The HGX H100 and H200 that we’re selling are basically the SuperMicro version of the DGX,” Hemminger told CRN. “It has the same parts. About 90 percent of the system’s value is still the GPU, but it’s basically the same system. In fact, we tested the BasePod with the DGX. Then once we got the BasePod certification, we removed the DGX and replaced it with the HDX and confirmed that they get basically the same performance. Actually, the HDX is a little higher, but basically the same capacity, same GPU, same network interface, same performance.
The Hitachi iQ integrated system includes the Nvidia HGX H100 system based on the Supermicro SuperServer; Nvidia software including Nvidia Base Command Manager, Nvidia AI Enterprise, and Nvidia GPU Direct Storage; Hitachi Content Software for files with Hitachi Vantara storage nodes; Nvidia networking components; and Hitachi Vantara HA810 G3 servers for management.
The path to market is different for the DGX and HGX versions, Hemminger said.
“Going to market with DGX is basically getting into the channel and working with DGX-authorized resellers to differentiate our storage solution and our other solutions so they can plug them in,” he said. “With HDX, our goal is that we can sell everything.”
Laughlin said this gives it a fully integrated stack solution for resale to partners who are not DGX-certified.
“We are in multiple talks with DGX-certified partners to sell them. …So there are a number of different routes to market that we’re working on with partners,” she said.
Hitachi Vantara expects Hitachi IQ’s primary workloads to be primarily large language model building, fine-tuning and RAG (retrieval-augmented generation), Hemminger said.
“Maybe not too much can be predicted,” he said. “You’ll probably see it more in mid-range to entry products. In terms of industry verticals and use cases, our focus right now is mostly focused on where we are strong, vertical wise, which is banking and financial, healthcare, telecom and manufacturing.
Hemminger said Hitachi Vantara has not seen any supply issues related to recent reports that Nvidia is shifting some orders away from Supermicro.
“In terms of supply, it is not an issue at this point in time,” he said. “In fact, the major issue has been changes in US export laws. It has accepted country list, banned list and restricted list. For restricted list, you basically have to get special permission. ‘forbidden.’ The approved list included about 50 countries from around the world. Now it has come down to 25. So more countries have been put on the banned list, and so now the burden is on us, Nvidia and Supermicro because we have to get exceptions for those banned list countries every time. “There’s really no change in supply, it’s just which countries we can sell to without doing anything.”
The extension of Hitachi IQ is really an evolution of what Hitachi has done, said Dave Cerniglia, president of Consilient Technologies, an Irvine, California-based solutions provider and longtime Hitachi Vantara channel partner.
“The message from Hitachi Vantara has been very consistent,” Cerniglia told CRN. “It’s all about data and what you do with that data. You have these enterprises and medium-sized companies that are collecting massive amounts of data. And what is AI? AI is just another way of how you leverage that data, and how you leverage those workflows to make an organization more profitable.
Cerniglia said Hitachi Vantara is continuously unveiling technologies that are increasingly robust and scalable.
“Hitachi focuses on scalability, performance, which they can deliver to their customers with massive amounts of data,” he said. “So for me, this is the AI ​​story. It’s really, how do you turn a company’s data into intelligence? How can you turn that data into an asset? So I’m excited. I think this is another way for us to be able to offer customers more of what they want.”

DOJ is pursuing ‘a radical agenda’ by forcibly shutting down Chrome sales

The US Justice Department is reportedly looking for a judge to force Google to sell Google Chrome, the world’s most popular internet browser.

In a move that could hit $88 billion Google and its cloud business, Google Cloud, The US Justice Department is reportedly seeking to ask a judge to force Google to sell the world’s most popular internet browser: chrome,
“The DOJ is pursuing a radical agenda that goes far beyond the legal issues in this case,” Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said in a statement to CRN.
“The government putting its thumb on the scale in this way will hurt consumers, developers, and American tech leadership exactly when it’s needed most,” Mulholland said.
Officials from the US Justice Department’s Antitrust Division will ask US District Judge Amit Mehta – who has Judgment given against Google In relation to monopoly in the past – to coerce Google will sell its Chrome browseraccording to a report By Bloomberg News.
[Related: The 10 Coolest GenAI Products And AI Tools Of 2024]

What is the DOJ looking for?

DOJ is reportedly seeking to break up Google Android From Search and Google Play, but without obligating Google to sell Android.
Bloomberg said another requirement would force the Mountain View, California-based tech giant to share more information with advertisers.
The DOJ will also reportedly ask Judge Mehta to impose data licensing requirements.
Another recommendation from the DOJ is that Google provide more options to prevent websites from using their content. Google’s artificial intelligence The product, according to a Bloomberg report.
The government will also insist on banning exclusive Contracts with iPhone providers like Apple The report states that Chrome will have to be made its default web browser.

Chrome has more than 65 percent share of the global browser market

According to IT market research firm Statista, Google Chrome’s share of the global market for internet browsers by August 2024 was more than 65 percent, or nearly two-thirds.
Apple’s Safari browser ranks second among Internet browsers with about 18 percent share. Statista said no other browser controlled more than 5 percent of the overall market share worldwide.
Google Cloud has several key technologies inside Chrome, including its zero-trust security offering Chrome Enterprise Premium, as well as Chrome Enterprise Core, which allows businesses to configure and manage the Chrome browser across their organization.
Additionally, Google Cloud’s flagship AI offering, Gemini, It is becoming more important for Google Chrome’s search engine, while many users have access to it workplace Applications like Gmail and Google Docs through Chrome.

Google partner: ‘Government may be exaggerating’

One Google partner, who makes millions of dollars each year from Google sales, said he believed “the government is probably overstepping here.”
“Chrome is loved by our customers. It’s effective for a number of reasons, like its security features, which we believe have nothing to do with monopoly and more to do with their technology,” said a top Google partner executive. Said on condition of anonymity.
The executive said that if Google Chrome is sold from parent company Google, it would fundamentally change the operational structure of both Google and Google Cloud.
“So Google Chrome is directly tied to its advertising business, and obviously that’s important to the company. That money drives innovation and R&D across Google,” he said. “You remove Chrome from Google, it may start to break a little. …This will certainly have a broad impact on Google Cloud.”
The DOJ will ask District Judge Mehta to force the breakup because it represents the access point through which people use its search engine, Bloomberg News reports.
Bloomberg said the government has the option to decide whether Chrome sales are necessary at a later date if some other aspects of the measure create a more competitive market.
DOJ representatives did not respond for comment by press time.

Accenture launches new suite of GenAI-focused security services

Paolo Dal Sin of Accenture Security told CRN that services are ready to use the technologies to secure GenAI use, as well as protect against AI-powered attacks and security capabilities.

A newly announced set of Accenture cybersecurity services aims to secure customer use of GenAI, as well as protect against GenAI-powered attacks and enhance existing security capabilities, the head of Accenture Security told CRN.
IT consulting giant Accenture, number 1 on CRN solution provider 500 The new services were launched on Tuesday amid a shift in the market towards adopting GenAI for business growth and its productivity enhancing capabilities, for 2024, according to Paolo Dal Sin, global head of Accenture Security.
[Related: Accenture To Train 30,000 Staff On Nvidia AI Tech In Blockbuster Deal]
Ultimately, Accenture sees GenAI as an opportunity “to develop new services and re-invent existing services” within the cybersecurity sector, Dal Sin said.
The launch includes Accenture’s announcement of several new Cyber ​​Future Centers around the world focused on GenAI and security, including a new US center in Washington, D.C. It said each center will employ more than 100 experts.
To safeguard the adoption of GenAI, Accenture is unveiling its Secure AI Solutions offering, focused on enabling organizations to mitigate the risks of data exposure and address vulnerabilities in AI models so organizations can adopt new technologies. To avail benefits safely.
For example, according to Dail Sin, Accenture will now offer a GenAI security diagnostic tool that it previously piloted internally for its own use.
The tool’s capabilities include identifying vulnerable data lakes and unauthenticated fundamental models, he said, while also providing the ability to shield fundamental models from rapid injection attacks using an approach similar to an “AI firewall.”
To protect against AI-powered attacks, Accenture’s new services will include protection capabilities against deepfakes, which are increasingly being used for phishing and social engineering.
Key components will include a service co-engineered with deepfake detection startup Reality Defender, which Accenture has backed as an investor. “I believe there is nothing available yet on the market” that is comparable, Dal Sin said.
Meanwhile, Accenture is also enhancing its existing cybersecurity services using GenAI, including its managed detection and response (MDR) offering.
According to Del Sin, the company’s security teams are taking advantage of AI assistants that can better collect and analyze threat intelligence, ultimately significantly improving risk correlation.
Using these capabilities, he said, “there has been a material improvement in the effectiveness” of providing MDR to customers.
Another example is on identity security, where Accenture has developed an agent that can dramatically improve the speed of user provisioning and access control, Del Sin said.
“It’s something we’ve never seen before, because it was a very people-centric service,” he said.

Netrio and Success Computer Consulting buy New York MSP as part of ‘aggressive’ acquisition strategy


‘I see us continuing to grow, both organically and through M&A. “We will expand our service offerings, particularly in AI and automation, which will be key to our long-term strategy,” says Mark Kleiman, CEO of Netrio and Success Computer Consulting.

As Netrio and Success Computer Consulting Alliance There is more to come, CEO Mark Kleiman says of their strengths in building a cybersecurity and AI powerhouse.
In fact, on Monday the merged company acquired Buffalo, NY-based MSP PCA Technology Group, a partner of Microsoft, HP, Apache, Scale Computing and others.
“Our M&A Strategy There are two tracks. One is to grow our regional presence by bringing MSPs with strong brands and service offerings to specific regions,” Kleiman told CRN. “The goal is to merge those regional players with our national infrastructure to create a more robust and scalable offering.”
PCA brings a deep understanding of business needs and a customer-centric approach that closely aligns with the combined company’s focus on providing solutions that drive innovation, efficiency and growth for customers, while reducing costs and cyber security risks. Also reduces. Terms of the deal were not disclosed.
Although the companies have not disclosed the number of employees coming into the acquisition, a spokesperson said the combined companies – Netrio, Success and PCA – will have a workforce of more than 250.
“We’re not just focused on helping businesses become more secure or more efficient, we’re thinking about how to help them operate better, how to manage data more effectively,” Kleiman said. and how to stay ahead of the technology curve.”
Watch CRN’s interview with Kleiman on M&A, cybersecurity and how clients can get more value from the combined companies.
As CEO, what is your number one mission going forward for the combined company?
My main focus is to help the organization create a clear strategy for growth. In my opinion this growth happens in two parts. First, we want to see organic growth by leveraging our existing services, market access and the skill sets we already have. Additionally, we are looking for ways to expand our offerings, whether through new services or acquisitions. However, the challenge with M&A is cultural integration. It’s one thing to bring in another great company, but the real challenge is uniting those teams, aligning our approach to customers, and building a cohesive organization.
What do you consider your biggest challenge going forward?
The biggest challenge is to manage growth without disrupting the organization. We want to be aggressive, particularly with M&A, but we can’t let it distract us from our day-to-day operations. Change is always hard for people, it can be both exciting and nerve-wracking. There needs to be a balance between pursuing acquisitions, adding new services, and ensuring that our teams remain focused on the work at hand. It’s all about keeping the machine running while we keep adding new pieces to it.
You have mentioned M&A several times. What is your M&A strategy going forward?
Our M&A strategy has two tracks. The first is to increase our regional presence by bringing in MSPs with strong brands and service offerings in specific regions. The goal is to merge those regional players with our national infrastructure to create a more robust and scalable offering. We will also focus on enhancing our core services such as centralized NOC (Network Operations Center) and service desk, but we want to maintain the local touch that makes these regional businesses valuable to their customers.
What else do you want from your vendor partners?
Lead generation is a major question of our seller partners. It is a symbiotic relationship. We rely on our partners to provide high-quality products and services to our customers, but we also want them to actively work with us on go-to-market efforts. This means identifying leads and helping us expand our reach. A good partnership goes beyond just providing a product, it’s about working together to create opportunities.
Small and middle-market companies often face unique challenges. Which technology solutions do you think are most important to them at this time?
For small and mid-market companies, the focus is still on fundamentals, operational efficiency, stability and security. They need platforms that are secure, reliable and don’t bog them down with unnecessary issues like frequent outages or complicated patches. But we also help them identify areas where they can improve, especially around security, compliance and governance. And increasingly, we are helping these businesses think about AI, automation, and public cloud as part of their long-term strategy. The goal is to help them become better operators, not only to survive, but also to thrive in an increasingly competitive environment.
Speaking of AI, how does the combined company plan to take AI to market?
AI is definitely a big focus for us. Although the concept of AI is not new, its adoption is still in its early stages, especially for small and mid-market companies. Many of our customers are starting to learn how AI can help them improve operations, but there are still things to learn. They need to understand not only where AI fits into their business, but also the real benefits it can bring. Our goal is to help bridge that gap by introducing AI solutions that improve both the way we provide services and the way our customers work. We are investing in a range of AI platforms and solutions, but I believe the future will be a mix of different technologies to suit the specific needs of our customers.
What differentiates Netrio and Success from competitors in the market?
One of our biggest differentiators is the combination of a strong regional presence with a national scale. We have the local expertise and relationships, but we are also able to bring to bear the resources and infrastructure of a larger organization. The breadth of our services, from traditional infrastructure support to cutting-edge solutions like AI, data management and security, is another key differentiator. We’re not just focused on helping businesses become more secure or more efficient, we’re thinking about how to help them operate better, manage data more effectively and stay ahead of the technology curve. How to stay ahead?
Where do you see the company in the next three to five years?
I see us continuing to grow both organically and through M&A. We will expand our service offerings, particularly in AI and automation, which will be key to our long-term strategy. Security and governance will always be at the core of what we do, but we also want to evolve with the changing needs of our customers. Over the next few years, I hope to see us not only maintain our position in the market, but also differentiate ourselves by offering innovative solutions that help our customers become better operators in their own right.
Ultimately, how do you see the merger of Netrio and Success creating more value for your customers?
The merger allows us to offer a more comprehensive range of services while maintaining the personalized, regional approach that our customers value. We can bring our national scale and infrastructure, while also providing the local touch and deep industry knowledge that clients rely on. For our customers, this means more choice, more innovation and better support as they grow and face new challenges. The combined company will be able to deliver advanced solutions such as AI and data management without sacrificing the fundamental services needed to operate securely and efficiently. It is about creating more value for them in a rapidly changing landscape.

Cloudera expands data lineage, metadata management capabilities with Octopi acquisition

Cloudera has also introduced new AI assistants to help data scientists, data engineers, and developers increase productivity and streamline data workflows.

Data platform giant Cloudera has struck a deal to acquire data lineage and data catalog technology developed by Israel-based octopi To extend Cloudera’s data catalog and metadata management capabilities for data analytics and AI functions.
The acquisition comes as businesses and organizations are looking for ways to use their data for AI, machine learning and predictive analytics initiatives – a move that requires finding and managing all the relevant, relevant and reliable data. Is required.
Managing metadata automatically to provide a unified view of data has become more complex as data increasingly spreads across distributed data architectures, including hybrid and multi-cloud environments. Data security and governance have also become more complex.
[Related: Cloudera Teams With Nvidia To Create New AI Inference Service]
“When using data to make business-critical decisions, enterprises cannot afford blind spots or inaccuracies, and they must surely slow progress from identifying reliable data,” Cloudera CEO Charles Sainsbury said in a statement. This should not be allowed to happen.”
“Our customers need to automatically discover data across multiple repositories, show deep lineage of assets both within and outside the Cloudera estate, and leverage a robust data catalog to identify data assets that can be consumed . The acquisition of Octopi’s platform enhances Cloudera’s data, analytics and AI platform, enabling customers to gain greater visibility of their data regardless of their data management provider,” Sainsbury said.
cloudera said it has signed a definitive agreement for the deal and expects the transaction to be completed by the end of this month. Terms of the acquisition were not disclosed. While there is a banner on Octopie’s website stating that it has been acquired by Cloudera, Cloudera’s statement says that it is only acquiring the Octopie platform.
Octopi, founded in 2016, is based in Rosh HaYayan, Israel, with its U.S. headquarters in Wilmington, Del.
The company’s platform leverages data mapping and knowledge graph technology to power its automated data discovery, multidimensional data lineage, data catalog, and impact analysis capabilities.
According to Cloudera, with the addition of Octopi technology to the Cloudera platform, customers can expect improved data discoverability, data quality, data governance, and data migration support capabilities.
“Cloudera and Octopi represent a perfect symbiosis by bringing together centralized data and metadata management,” Octopi CEO Yael Ben Ari said in the statement. The significant challenge of understanding and controlling data in multi-cloud and on-premises environments.
In additional news, Cloudera, part of crn 2024 big data 100has also launched Cloudera Copilot for Cloudera AI, which the company said will provide “secure and intelligent” assistance capabilities to help data scientists, data engineers, and developers increase productivity and streamline data workflows. Does.
According to the company, Cloudera Copilot improves reproducibility across projects, ultimately helping enterprises bring reliable data, analytics, and AI applications into production faster.
Specifically, the new offering automates code creation, data transformation, and troubleshooting tasks; Provides ongoing coding support; And it includes on-demand guidance, optimal solutions, and insights to maintain high coding standards.