Nvidia finance chief Colette Kress said in her third-quarter remarks before CEO Jensen Huang addressed the issues with Blackwell, “There are some supply constraints in both the Hopper and Blackwell systems, and demand for Blackwell will remain the same for several quarters into fiscal 2026.” Expected to exceed supply by 2020.
Nvidia CFO Colette Cress said the company is “racing to deliver at scale to meet the incredible demand” for its recently launched Blackwell GPUs and related systems, which are designed to improve the performance and efficiency of Generator AI. It has been said to take a big leap.
Cress made the comments during the AI computing giant’s Wednesday earnings call, where the company revealed that it third quarter revenue Nearly doubled to $35.1 billion, mainly due to continued high demand for its data center chips and systems. [Related: Nvidia Reveals 4-GPU GB200 NVL4 Superchip, Releases H200 NVL Module]
In prepared remarks at the latest earnings, Kress said that Nvidia is about to start shipping blackwell productsNow in full production as of January, it plans to increase those shipments in its 2026 fiscal year, in line with the following months.
“Both the Hopper and Blackwell systems have some supply constraints, and demand for Blackwell is expected to exceed supply for several quarters into fiscal 2026,” they wrote.
On the call, Kress said Nvidia is “on track to exceed” its previous Blackwell revenue estimate of several billion dollars for the fourth quarter, which will end in late January, as its “visibility into supply continues to increase.” ”
“Blackwell is a customizable AI infrastructure with seven different types of Nvidia-made chips, multiple networking options, and air- and liquid-cooled data centers,” she said. “Our current focus is on meeting strong demand, increasing system availability and providing our customers with the optimal mix of configurations.”
During a question-and-answer session of the call, a financial analyst asked Nvidia CEO Jensen Huang to address Sunday report By industry publication The Information, which details some customer concerns about overheating of Blackwell GPUs in the most powerful configuration of its Grace Blackwell GB200 NVL72 rack-scale server platform.
The GB200 NVL72 is expected to serve as the basis for upcoming major Nvidia AI offerings from major OEMs and cloud computing partners, including Dell Technologies, Amazon Web Services, Microsoft Azure, Google Cloud, and Oracle Cloud.
In response to a question about The Information’s report on overheating of the Blackwell GPUs in the GB200 NVL72 system, Huang said that while Blackwell’s production is “all in” with the company exceeding previous revenue projections, he added that engineering What Nvidia does with OEMs and cloud computing partners is “rather complicated.”
“The reason is that although we build the full stack and the full infrastructure, we separate all the AI supercomputers and we integrate them into all the custom data centers and architectures around the world,” he said on the earnings call. Said.
“That integration process is something we have done for generations. We’re pretty good at it, but there’s still a lot of engineering to be done at this point,” Huang said.
Huang noted that Dell Technologies announced Sunday that it has begun shipping its GB200 NVL72 server racks to customers, including emerging GPU cloud service provider CoreWeave. He also mentioned the Blackwell system that is being built by Oracle Cloud Infrastructure, Microsoft Azure and Google Cloud.
“But as you can see from all the systems that are being put in place, Blackwell is in very good shape,” he said. “And as we mentioned earlier, the supply and turnaround we plan to make this quarter exceeds our previous estimates.”
Addressing a question about Nvidia’s ability to execute on its data center road map, which moved to an annual release cadence for GPUs and other chips last yearHuang remained steadfast in his commitment to the accelerated production plan.
“We are on an annual roadmap, and we are looking forward to continuing to execute on that annual roadmap. And by doing so we increase the performance of our platform. But it’s also really important to understand that when we’re able to increase performance and do so at X factors at a time, we’re reducing the cost of training, we’re reducing the cost of inference, and we’re reducing “We are reducing the cost of AI so that it can be more accessible,” he said.
Partner’s opinion on Nvidia growth, investors’ reaction
After Nvidia released its third-quarter earnings, investors reacted, sending the company’s share price down more than 1 percent in after-hours trading.
While the company beat Wall Street expectations on revenue by nearly $2 billion and beat the average analyst estimate for earnings per share by 6 cents, its fourth-quarter revenue estimate came in just slightly higher than Wall Street expected. Was.
Andy Lin, CTO Top Nvidia Partners Mark III Systems in Houston, Texas, told CRN that while demand for Nvidia’s data center GPUs and related systems is “obviously incredibly strong,” it has set a “such a high level” for itself, with triple-digit growth in multiple quarters. have set.
“These numbers are still surprising, especially on a year-over-year comparison. But this is clearly a smaller increase than before,” he said.
However, Lin said, some customers are holding off on making any purchases right now because Nvidia is switching from Hopper-based systems to Blackwell-based systems.
“There are certainly a number of organizations that we look at that certainly haven’t spent [money on new infrastructure and are waiting] On Blackwell. So the question is, how many of them, at what scale, and what will that look like? And I think that might be something that the market is probably underestimating a little bit,” he said.
“The time to result, the time to value on this is significantly reduced (with Apache Private Cloud AI),” says Abdi Goodarzi, principal at GenAI, Deloitte Consulting LLP. “We have our full commitment with the (Deloitte-Nvidia-HPE) partnership to bring this to the enterprise as quickly as possible.”
Deloitte – a $67.2 billion global technology strategy, tax and systems integration giant – is building Bonanza, a GenAI agent application based on Hewlett Packard Enterprise’s private cloud AI service.
The first of a wave of Deloitte GenAI agent applications based on Apache Private Cloud AI was unveiled at Apache Discover Barcelona as part of an expanded collaboration agreement between Deloitte, Apache and Nvidia, which will be powered by Nvidia AI Computing by the Apache Private Cloud AI service. Was focused on.
“We’re going to create (AI) agents for almost every part of the business, from the front office to the back office to the center of the business,” said Abdi Goodarzi, principal and leader of GenAI products and innovations at Deloitte Consulting LLP. In an interview with CRN. “This is basically the wave of the future. This is the era of AI enablement. We are investing huge amounts in this area. Our aim is to help our customers build more efficient businesses.”
The first of the Apache Private Cloud AI-based GenAI agent offerings is Deloitte C-Suite AI for CFOs. That GenAI Agent offering provides GenAI financial analysis, modeling and competitive market analysis to chief financial officers.
Another new offering from Deloitte, Atlas AI, is a GenAI solution designed to accelerate life science discoveries focused on molecular modeling and drug discovery.
Deloitte LLP has a 2025 product road map based on the Apache Private Cloud AI service that includes a “significant number” of new GenAI agents in the finance sector, Goodarzi said.
“In 2025, we are going to focus on supply chain, sales, services, marketing and tax (AI agent applications),” Goodarzi said. “Those are areas that many customers have told us they would welcome these advanced capabilities and enhancements to.”
Deloitte is leading the way with a broad spectrum of GenAI agent apps, from simple agents to assist clients to advanced agents that process large amounts of data.
The Deloitte GenAI Agent app covers dozens of categories ranging from many aspects of aggressive financing, such as invoice processing or sourcing and procurement, to planning, product lifecycle management, warehouse management and supply chain with logistics, Goodarzi said.
Virtually every industry has business processes like claims processing for insurance companies that can be accelerated with the use of GenAI, Goodarzi said. He said the net effect of such insurance claims AI agents will be increased customer satisfaction and increased sales.
Goodarzi called the Apache Private Cloud AI Platform a “significant game changer” that will bring dramatic improvements to all types of business processes. “This is a significant commitment for Deloitte to partner with Nvidia and Apache,” he said. “We have an AI factory with thousands of developers and thousands of AI practitioners, data scientists who understand how you can bring these ideas and innovations to life. “Our goal is to enable this as quickly as possible.”
The Deloitte GenAI agent design and architecture is based on very “short cycle” development times, Goodarzi said, with the aim of rapidly delivering AI benefits to customers.
“The time to results, the time to value on this (with Apache Private Cloud AI) is significantly reduced,” he said. “We have our full commitment with the (Deloitte-Nvidia-HPE) partnership to bring this to the enterprise as quickly as possible.”
The Apache Private Cloud AI service with Apache GreenLake offers at least a “five to ten times” cost advantage over the standard infrastructure offering, Goodarzi said. “How do you do this if you don’t have the power of PC AI?” he asked. “You will need a lot more computing power. You need a lot of data. You need a lot of networking.”
In fact, Goodarzi said that Apache Private Cloud AI allows Deloitte clients to achieve faster times to results. “On day one you can actually get results from the (HPE) PC (private cloud) AI once it’s connected to your system,” he said. “Obviously there is a shorter deployment cycle to connect PC AI to your data and integrate it with your security… Once connected it starts consuming the data and understanding the data and generating results immediately.” Starts doing it. No other system can do this!”
Cherry Williams, senior vice president of GreenLake Flex Solutions and Common Services, said that Deloitte C Suite AI was enabled in less than three days.
Additionally, Williams said Deloitte has added two additional use cases on PC AI. “The ability for Deloitte to enable these use cases on PC AI at such a fast pace means they are then able to offer similar or even faster deployment of AI use cases to their clients,” he said.
Williams said, “One of the challenges customers face is really enabling their GenAI use cases and they’re turning to partners like Deloitte to help them figure out how to enable those use cases.” be enabled.”
Williams said the partnership with Deloitte will ultimately allow Apache to reach and “empower” much more customers with GenAI-based services.
Goodarzi predicted that 2025 will be a key year for customers to leverage their enterprise data to create GenAI-based agents. “Customers need to think about enterprise data strategies and platforms and how they can prepare themselves for the arrival of silicon and infrastructure that could give them a significant boost,” he said.
Goodarzi emphasized that talented employees will be at the center of the new wave of GenAI agent application offerings. “Humans are at the center of it and will always be at the center of it,” he said. “So get your data right. Get your talent right. Get your risk and regulatory expectations right. (HPE) Invest in PC AI and other infrastructure. By then you’re able to really activate all these agents and other innovations and harness them to accelerate your business and really become disruptive in your industry.
As the data security startup announced more funding and a $3 billion valuation, CEO Yotam Segev tells CRN that Cyera plans to continue consolidating more data and AI security tools onto its platform, ultimately providing customers with a ‘unified view of risk.’
Cyera plans to deploy its new $300 million round — the data security startup’s second fundraise of that amount this year — to continue expanding its data and AI security capabilities with the aim of becoming the most comprehensive platform in a red-hot market, Cyera Co-Founder and CEO Yotam Segev told CRN.
The three-year-old company announced the Series D round of funding Wednesday, which brings the startup’s valuation to $3 billion. That’s up from its $1.4 billion valuation achieved in April, when New York-based Cyera raised its prior $300 million round. [Related: 10 Cloud, Data And Identity Security Startups To Watch In 2024]
In an interview with CRN, Segev said that the company’s data security posture management (DSPM) offering has seen massive demand from customers and partners amid the widespread enterprise push to adopt generative AI.
Cyera’s tool specializes in rapidly providing visibility into the status of an organization’s data and identity access — something that has seen surging interest as a means to enable usage of GenAI applications such as Microsoft 365 Copilot.
This need for securing data against exposure in a GenAI world has proven highly complex, however, in part because data is held in so many different places and the access to that data is often misconfigured.
Cyera’s DSPM technology aims to simplify matters with an agentless approach and through its ability to work across cloud environments, SaaS, data lakes and on-premises environments. The result is a “unified view of risk,” which enterprises have always wanted but have never been able to achieve, Segev said.
Cyera has not been content to stick with its core area of DSPM, though, and has recently expanded into its second major category with a move into data loss prevention (DLP). In October, the company acquired Trail Security for $162 million, which Segev said has brought a unique AI-powered approach to DLP onto the Cyera platform.
When combined with Cyera’s DSPM capabilities, customers now “can actually go and build a data security program” that is truly effective, he said.
“Suddenly, you’re able to actually make DLP work — because you know what you’re trying to protect, and because you know what the crown jewels are and where they reside,” Segev said.
With the help of the new funding, Cyera plans to continue enhancing its DSPM tool as well as adding further capabilities within the DLP sphere, he said.
From there, the company expects to continue introducing new functionality — including in areas such as privacy as well as in governance, risk and compliance (GRC) — so that it can cover as many data security needs as possible for customers, according to Segev.
The ultimate aim is “to consolidate the space and bring simplicity to a space that’s very siloed and complex,” he said.
And Cyera is planning to keep moving fast — to the point that “a year from now, I’d like to see enterprises that are running their data security program on Cyera, from DSPM to DLP to AI security to governance, risk and compliance, to privacy operations, to identity data access governance,” Segev said. “I want to see enterprises, big enterprises, that are running their program end-to-end on Cyera.”
During the interview with CRN, Segev also discussed Cyera’s channel strategy for 2025. Current partners of Cyera include GuidePoint Security, World Wide Technology and Trace3.
The new funding was led by Accel and Sapphire Ventures, while other participating investors in the round were Sequoia, Redpoint, Georgian and Coatue. Cyera has now raised a total of $760 million since it was launched in 2021.
What follows is an edited portion of CRN’s interview with Segev. What has 2024 been about for Cyera?
We’ve had a very, very good year. The promise of AI changing, transforming and upgrading the enterprise has been a huge catalyst for us. Because when you think about AI, it fundamentally runs on two things — it runs on GPUs and it runs on data. And I think the lack of visibility, the lack of control, the lack of management of data in the enterprise, has really been exposed through this AI transformation.
To give an example that everybody can relate to — you work in a big enterprise, maybe you have access to data you’re not supposed to have access to in SharePoint or Office 365. That’s been a long-standing problem, but how would you ever get to it? How would you find it? And suddenly with [Microsoft] Copilot, you can query Copilot, “Who’s got HR violations in that company, and exactly what are they?” And if you have access to that information, you’re going to get an answer in five seconds with the exact details you wanted to find out. So the game has changed on data security. What is your biggest differentiator from competitors when it comes to providing visibility into data?
Technologically, there’s two things that we’ve done that are unique and powerful in the market. The first one is the AI-powered classification. So essentially, it’s being able to learn the customers’ unique data types and contextualize those data types. Because it doesn’t end with, “what is the data?” There’s another set of questions you want to ask that, if the AI can answer them for you, you’re in a much better place and you can take much more action. Whose data is it? Is it synthetic data or is it real data? Does it belong to a European citizen or a German citizen or a Canadian citizen? And the AI can do much of that for us.
The second is the agentless, cloud-native connectors, which allow us to connect once to the underlying cloud provider — and from that single point of integration, unlock all of the different databases, buckets and warehouses that you have inside. Whereas in the past, you had to connect one by one over the network. That was extremely complex. A network-based approach that’s not agentless and not cloud-native just isn’t able to access the data in the different accounts. You have to be able to connect over the API into the accounts. Otherwise, it’s just not going to work. So this is a huge differentiator in the market. So when an organization is thinking about all the things they need to do to keep their cloud secure, Cyera would be a piece of that?
I think that’s right — especially when you tie it into identity, like we have. [Before] the move to the cloud, from a consumption model perspective, [organizations] used to own all the layers of the stack. Then we gave the infrastructure away, and we gave the network away. We gave away more and more layers of the stack. And at the edge of it, we ended up with SaaS. And what do we control in SaaS? What data we put in it, and what access we grant to that data. And those layers are consistent across the entire shared responsibility model with the cloud providers — data and access to data. And that’s where Cyera is adding a ton of value to customers. We have customers today that have us connected to over 10 different environments — so their AWS, GCP, Azure, Snowflake, MongoDB, Databricks and their Office 365 and some Google Drive they got through acquisition. And some on-prem databases and fileshares that are left from the olden days. And Box and Salesforce. And so if you get a unified view of risk, and you’re able to build workflows across all of these systems, the impact of that is outstanding.
I remember when we started, I spoke with the head of security operations for Wells Fargo. He told me, “Yotam, you know what I hate about these SaaS security vendors? They show me the risk in Salesforce, and they show me the risk in Office 365, and they show me the risk in Atlassian. What do they think — that I have a security person for Salesforce and I have a security person for Office 365? Just show me the top 10 risks across all of my environment. And show me how to solve them.” And that’s exactly what Cyera is doing. When you look at cloud security, of course, you have the infrastructure and vulnerability aspect. But then you have the data and access aspect — what data lives where and who has access to it. And that’s such a big part of protecting these new ecosystems. How is the Trail acquisition and your introduction of DLP expanding the opportunity for Cyera?
When I look at Cyera’s purpose in the industry, we want to be the one-stop shop for data security. We want to be the household name for data security. DSPM is such an amazing core to this. Because DSPM provides you with what we never had before — which is a full, comprehensive, automatic inventory of all the data, and specifically all the sensitive data, across the enterprise. With that, you can actually go and build a data security program. And suddenly, you’re able to actually make DLP work — because you know what you’re trying to protect, and because you know what the crown jewels are and where they reside. And you can build policies to either keep them there or to prevent them from going places where they’re not supposed to go, or to monitor how they’re moving in the environment. That was the biggest challenge with DLP. We were always able to do DLP for the most simplistic data types, and even that with a pretty high false positive rate. But what about the complicated data types? How do we deal with them? Whether it’s documents of different sorts, intellectual property, PII. How can we really build policies around those types of data — where they can reside, where they can go, who’s allowed to access them? And that’s where the Trail acquisition has really enabled us to go from DSPM to what we’re calling a data security platform — and not just show you where the data is and help you remediate it at rest, but also track it in motion, detect any violations in motion, and be able to put the preventative controls in place to make sure that you’re not subject to these types of incidents. Is that opening some new doors for you?
Absolutely. Last week we had our first user conference, and the biggest piece of feedback that every practitioner was telling us is, “I can’t wait to go back to my CFO and tell him that it’s not just DSPM. It’s DSPM plus DLP.” It’s taking care of both of these problems in one unified platform.
This is just the beginning. This product portfolio is going to continue to grow. In conversations [with customers], the one thing you always hear is how many different use cases they need to deal with. And today, they have to stitch together 30-plus products in order to really build the program. That’s impossible to do. That has to be simplified, it has to be unified. And that’s where we’re leveraging the venture capital investment in order to consolidate the space and bring simplicity to a space that’s very siloed and complex. What are some of the other goals for the funding?
This funding is allowing us to really invest in the product — not just in DSPM, which is our core and we’re doubling down on, but also to go into DLP through this acquisition, and to go into the identity space and answer the questions around data access governance. [We’ll] continue to make moves into adjacent spaces where the customers are asking us to consolidate these use cases and these requirements into one platform. And to have the resources to actually do that is amazing. Are there other categories or tools that you’re interested in consolidating on the platform?
First of all, I think that in the DLP space, there’s a lot of depth. It’s not just one product or one solution. The Trail team is going to grow within Cyera to build multiple product lines in the DLP space. At the same time, we’re seeing a lot of requests from our customers to answer many of the other core data use cases — whether that’s the privacy requirements around their data, whether it’s GRC and compliance requirements around data, evidencing to auditors, compliance checks, compliance health. [These are] a lot of things that today are not part of what Cyera is putting out in the market, but that will quickly become [the case].
And of course, the biggest topic today is AI security. How do we protect an LLM that’s been built in the enterprise, and we want to put real data into it? And how do we control what happens to that data that the LLM spews out? What is your channel strategy for 2025?
We’re taking a focused approach with our key partners. We want to make sure that we’re fully enabling their teams and getting into the market together with a high-touch approach. That’s where we are for next year in this regard. And I think that as we continue to mature that program and mature those relationships, we’ll be able to open up to more partners and really widen our scope of interaction to the wider channel community.
The channel, the value-added resellers, are amazing at supporting these implementations, adding value on top of these implementations — and really making sure that the customers are not just getting a product, but getting a solution to the problem they set out to solve. And we’re very fortunate to be working with [our current] partners. Overall what do you see as the big theme for Cyera in 2025?
I think for us, DSPM is obviously very, very hot right now, and every enterprise is looking at a DSPM initiative. And that is the core of our business. That is where we’re spending the majority of our time and that’s where the majority of the development is.
Around that, the exciting avenues for us are obviously growing into the DLP space and challenging some of the incumbent vendors in that space with their offerings, and bringing to the table DLP that is AI-powered. And that is game-changing, and that’s exactly what Trail is about. We also have the ability to provide agentless DLP, which has a completely different time to value than the traditional DLP that the market is used to. Everybody wants DLP to be easier, and this is exactly that. This is the magic sauce that turns DLP into a very successful program quickly.
Around that, we have our identity focus and we have our AI security focus. Being able to answer the question — “who can access what data in the enterprise?” — it’s mind blowing. Practitioners have never seen a platform that is able to actually answer that question for them.
And on the AI security front, I think that every CISO is being challenged to support this huge transformation that’s happening. And the ability to really partner with the business, enable the business to move fast with AI, to run ahead — but to do it in a way that’s governed, secure, compliant — that’s also extremely meaningful to our customer base. And we’re very proud to be at the forefront of this challenge. Looking out a year from now, what is your hope for Cyera and what things will be like at that point?
A year from now, I’d like to see enterprises that are running their data security program on Cyera from DSPM to DLP to AI security to governance, risk and compliance, to privacy operations, to identity data access governance. I want to see enterprises, big enterprises, that are running their program end-to-end on Cyera. That would be very, very exciting for me.
The hottest networking startups of 2024 focused their energies on the edge, private 5G, multi-cloud networking, and AI.
Newcomers to networking are carving out a niche for themselves as several trends are gaining momentum and enterprises look to refresh their infrastructure.
For starters, edge, multi-cloud and private networking capabilities are increasingly becoming necessities for many enterprises grappling with hybrid work or changing their physical office blueprint to better adapt to how their employees are working and doing business today. We are doing it to better match the reality. Also, AI has become not just a buzzword but a new reality. As AI use cases become more prevalent, especially in operational environments, being able to accommodate new computing and storage requirements, or adding AI in new locations on the network and at the edge of the network The networking infrastructure needs to change frequently.
Edge and private networking startups have exploded onto the scene, as well as cloud networking upstarts that are providing a new way for enterprises to achieve secure networking through consumption-as-a-service and consumption-based models so that businesses can Don’t have to break. Banks are modernizing their networks. To that end, many of these market newcomers are raising new rounds of funding to help them expand their reach and continue their innovative work.
From those specializing in multi-cloud-based offerings and networking-as-a-service to edge networking and AI, here are the 10 hottest networking startups of 2024.
Alkira
Upstart Alkira specializes in agentless, multi-cloud networking. The San Jose, California-based company emerged from stealth mode in 2020 with its consumption-based Cloud Services Exchange (CSX), an integrated, on-demand offering that lets cloud architects and network engineers build and deploy multi-cloud networks Is. minutes. Since then, the company has unveiled a collaboration with the Microsoft for Startups program, as well as established a deeper relationship with Amazon Web Services, whose marketplace also includes Alkira CSX.
Network Infrastructure-as-a-Service Specialist in October launched its cloud-based Zero Trust Network Access (ZTNA) serviceNetwork Infrastructure-as-a-Service, an offering that will further simplify security and networking for enterprises, according to the startup.
Alkira announced the closing of a $100 million Series C funding round in May, bringing the company’s total funding raised to date to $176 million.
Avis Network
Founded in 2019, Avis Networks has been hard at work innovating on its brand of open networking software for cloud-scale infrastructure. According to San Jose, California-based Avis, the company specializes in building open, cloud and AI-first networks that prioritize choice, control and cost savings.
The company introduced its One Data Lake earlier this year, launched CoPilot, a generative AI conversational network, and expanded its packet broker offering for applications and 5G General Packet Radio Service (GPRS) Tunneling Protocol (GTP) use cases, among other advancements. Upgraded.
Avis, which counts several major investors including Cisco Investments as backers, also has a partner program for reseller and distributor partners.
cape
Cape, founded two years ago, bills itself as a privacy-first mobile carrier that focuses on connecting people without compromising security and privacy. The carrier offers nationwide 5G and 4G coverage while preventing hackers and spam.
Cape competes with existing US-based carriers including AT&T and Verizon. The company partners with US Cellular on the physical infrastructure, runs its own voice service on top, and it runs its own mobile core.
In April, Cape raised $61 million in funding from A*, Andreessen Horowitz, XYZ Ventures, X/Ante, Costanoa Ventures, Point72 Ventures, Forward Deployed VC and Karman Ventures.
Selona
Wireless specialist Celona burst onto the networking scene four years ago with a platform that lets enterprises build and deploy 5G/4G LTE private networks, filling a huge gap in the wireless connectivity market at the time , interest in personal networking in particular had increased dramatically. venture over the years.
The Cupertino, California-based company enters the market through a strategic partnership with Apache Aruba for the resale of Celona’s cellular products. The channel-friendly company also works with partners through Solution Provider Partner ProgramIn October, Celona announced AirLock, a new suite of security capabilities for private 5G wireless network security.
Celona’s most recent – and oversubscribed – funding round was a $60 million Series C round in 2022.
alicity
Elisity is an expert in network segmentation that leads the market with its IdentityGraph technology.
According to the company, the San Jose, California-based startup’s identity-based microsegmentation technology helps enforce granular controls over users and devices, allowing organizations to employ network segmentation to protect against threats and limit the blast radius. Gives.
Alicity was founded in 2018. Company CEO James Winebrenner joined the company in 2020 after about a year with multicloud networking player Aviatrix Systems. Elicity also has a partner program for solution providers and system integrators.
In April, Alicity raised a $37 million Series B round of funding from Insight Partners. The company said it is using the funding for AI capabilities to predict and prevent cyber threats.
Highway 9 Network
Santa Clara, California-based upstart Highway 9 Networks offers a cloud-native platform that the company said is built for enterprise mobile users, applications and AI-powered devices.
According to Highway 9, the startup’s Edge product provides distributed network functions with integration with enterprise IT and major telcos.
Highway 9 also works with partners through its Mobile Cloud Alliance program to bring its connectivity and private 5G to end customers.
Highway 9 launched in stealth mode in February this year and revealed that it had raised $25 million in funding from Mayfield, General Catalyst, and Detroit Ventures.
Natalie
NetAlly is on a journey. The company began as a business unit of Fluke Networks. It was then part of NetScout Systems and became a standalone player until 2019. Today, NetAlley comes to market with its portfolio of switching, wireless, IP surveillance, storage, and security products.
NetAlly has approximately 50,000 global customers in 70 countries. The company does 100 percent of its business through channel partners, which includes approximately 300 solution provider partners globally, about 30 percent of which are MSP partners, the company told CRN in October. That same month NetAlly brought on Jeff McCulloughA 25-year channel veteran, as the new vice president of sales for North America.
Indigo
Nile, the next-generation networking services platform provider backed by former Cisco CEO John Chambers, came out of stealth mode in 2022 with its “reimagined” wired and wireless service, delivered entirely as a service. The company’s Enterprise Network as a Service (NaaS) offering aims to bring another option to the market for businesses that is different from what many of the market giants like Cisco are offering today.
The San Jose, California-based startup in March launches its full AI services platform With AI applications aimed at automating network design, configuration, and management. Nile AI Architecture includes Nile Services Cloud, which includes AI-based network design; Nile Service Block, which automates network deployment, including Nile Copilot and Nile Autopilot applications for access point configuration and AI-based network monitoring and operations.
In 2023, Nile raised $175 million in a Series C funding round, bringing its total funding to $300 million.
Proximo
Multi-cloud networking disruptor Prosimo emerged from stealth in 2021 with its Application eXperience Infrastructure (AXI) platform, which is modernizing and simplifying application delivery and experience in multi-cloud environments. According to the company, the Proximo platform can co-exist with existing vendors in a customer’s environment or be used to replace certain tools and features, such as zero trust or cloud peering.
The Santa Clara, California-based startup in June Security expert joins Palo Alto Networks To further secure application access in multi-cloud environments. Prosimo’s head of marketing told CRN that the partnership was important as the company looks to partner with more Fortune 500 companies.
The company raised $30 million in Series B funding in its latest funding round in 2022.
recognize
Upstart Recogni, which focuses on AI-based computing, builds compute systems that can provide multimodel GenAI inference for data center environments. The company said it has noticed a problem that many generative AI systems today are inefficient and consume too much power. San Jose, Calif.-based Recogni’s technology, however, is helping meet the larger compute requirements of AI workloads. Recogni’s latest $102 million Series C funding The round was co-led by Celesta Capital and Greatpoint Ventures in February. Juniper Networks revealed in November that it had invested in the startup as part of a Series C funding round.
Harvey partners told CRN that they have already made significant sales efforts around the Apache-Juniper combination and that an antitrust challenge would put millions of dollars in potential networking and AI revenue gains at risk.
Representatives from Hewlett Packard Enterprise and Juniper Networks met with U.S. Department of Justice (DOJ) regulators last week to fend off an antitrust challenge. Harvey proposes $14 billion acquisition of JuniperAccording to Bloomberg news report.
According to Bloomberg’s report, DOJ officials “are prepared to challenge the deal if necessary and have made their concerns known to Apache and Juniper”, but no final decision has been made on whether or not to bring a lawsuit.
“Hewlett Packard Enterprise and Juniper Networks are working with regulators to obtain the necessary approvals for our proposed transaction and expect the deal to close in late 2024 or early 2025,” said VMware in a statement in response to the Bloomberg story. By then it will be completed.” “We believe the transaction will create significant benefits for customers and fundamentally change the dynamics of the networking sector for the better by fostering greater competition and innovation.” [RELATED:HPE CEO On Juniper Merger, AI, Battling Cisco, And John Chambers’ Advice]
CRN also contacted Juniper Networks and the DOJ, but did not hear back by press time.
Bloomberg reported that a decision on whether to challenge the deal could be made as soon as this week. The media outlet also reported that Apache and Juniper “may opt to delay the deal until President-elect Donald Trump’s administration takes office in January in hopes of a more favorable outlook for the deal”.
CR Howdyshell, CEO of Fulcrum IT Partners company and top Apache and Aruba partner Advisex, said that if the deal is not completed it would cost his company “millions of dollars” in potential networking as a service and AI sales opportunities. Will come up with the Apache-Juniper deal.
“We already had an aggressive sales effort for next year based on the Apache Juniper acquisition,” Howdyshell said. “(HPE CEO) Antonio (Neri) has already announced that HPV is going to be a networking company. We believe that if this deal is not approved we will miss an important opportunity. This was a focused initiative for us for the next year. We have already started work on this.
Howdyshell said he sees the Cape-Juniper deal providing much-needed opportunities for the networking and AI markets. “I’m not sure what the thinking is behind a potential DOJ challenge,” he said.
Houdinishell said he expects Apache and Juniper to receive more favorable regulatory review from the incoming Trump administration.
“I would expect them to wait for the new administration to take office,” he said. “This is what makes sense. Now you have business people deciding at this point what is right for the business.”
Rob Schaeffer, president and chief revenue officer of E360, a top Apache partner based in Concord, Calif., ranked No. 128 on the 2024 CRN SP500, said he also would be disappointed if the Apache-Juniper deal does not receive regulatory approval. The potential networking and AI sales opportunities this will open up for the e360.
“I don’t know any details about why the DOJ would challenge this,” he said. “I don’t see how antitrust it would be, because there is significant competition in the networking sector, not only from Cisco, but also from other networking companies. Although the combination of Apache-Aruba and Juniper is very strong and interesting, I do not know whether it will have market dominance that would present any problems for the government.
Schaefer said he expected the deal to provide significant opportunities for e360, noting that networking is the largest part of the company’s business.
“We would like to see this deal move forward,” he said. “We were planning a significant sales effort around Apache-Aruba-Juniper for the new year.”
In a blog post last February, Neri said that the proposed $14 billion acquisition of Juniper Networks was intended to disrupt the networking “status quo” with a new modern AI-powered networking fabric.
Neri said in a blog post, “We are pursuing this acquisition because we believe that the combination of Apache and Juniper Networks will fundamentally change the networking industry – not by eliminating products from either portfolio – but by “By creating more options in this area.” He Addresses the potential competitive impact of the deal. “The incredible thing about a combination like this is that there are strong offerings on both sides and by bringing them together, we will increase value and flexibility for all of our customers.”
without direct separation rival ciscoNeri, which has maintained a dominant position in the networking industry since the mid-1990s, said the deal is in keeping with the current networking status quo.
“Our objective is to better solve customer challenges by integrating two of the most innovative and competitive organizations in the networking industry and disrupting the status quo,” Neri said. “Our combined business will create a new full-service networking company with a customer-focused approach to product development and a broad portfolio that will deliver value for our customers.”
The database and development platform provider is announcing several initiatives at Microsoft Ignite this week that make it easier for customers and partners to work with MongoDB on the Azure cloud.
MongoDB is expanding the scope of integration between its cloud database development platform and Microsoft Azure, which the company says will make it easier for partners and customers to build real-time data analytics links and develop generative AI applications.
Today in this week’s series of announcements Microsoft Ignite ConferenceMongoDB is integrating MongoDB Atlas cloud databases with Microsoft’s Azure OpenAI services and launching its MongoDB enterprise advanced database management tools on the Azure Marketplace.
MongoDB said the new integrations will provide partners and customers with greater flexibility in data development on Azure – particularly to help meet growing data demand. aye and generic AI applications. [Related: MongoDB CEO Ittycheria: AI Has Reached ‘A Crucible Moment’ In Its Development]
“I think the pace is phenomenal, things are changing daily,” Alan Chhabra, executive vice president of worldwide partners for MongoDB, said in an interview. crn About the rapid growth of AI and GenAI development. He said that experimentation with GenAI, especially within large enterprises, “is through the roof.”
Despite competing with Microsoft and its Azure Cosmos database, MongoDB is constantly expanding its alliance with Microsoft – as well as its own partnerships Amazon Web Services And google cloud – In recent years.
Last year MongoDB extended its multi-year strategic partnership with Microsoft, committing to a number of initiatives including closer collaboration between the two companies’ sales teams and making it easier to migrate database workloads to MongoDB Atlas on Azure. After this, such steps were taken in 2022 which allowed developers to Work with MongoDB Atlas Through the Azure Marketplace and the Azure Portal.
“Microsoft has become our fastest growing partnership,” Chhabra That said, seeing how MongoDB and Microsoft sales reps collaborate to sell MongoDB for Azure, specifically for AI and GenAI development.
At the Ignite event on Tuesday, MongoDB announced that customers building applications powered by recovery-augmented generation (RAG) can now select MongoDB Atlas as a vector store in Microsoft Azure AI Foundry, bringing MongoDB Atlas’s vector capabilities to Microsoft. Can combine with Azure’s generative AI tools and services. and Azure Open AI Service.
The company said this makes it easier for customers to enhance large language models (LLMs) with proprietary data and create unique chatbots, co-pilots, internal applications or customer-facing portals that leverage up-to-date enterprise data. And are based on context.
Chhabra said the new capabilities are designed to help customers develop and deploy GenAI applications. “It is not easy. There is a lot of confusion. It also has a lot of uses, because everyone knows they need to use it. [but] They’re not sure how.
“This integration will make it easy and seamless for customers to deploy RAG applications by leveraging their proprietary data in conjunction with their LLM,” Chhabra said.
in may MongoDB launched MongoDB AI Applications Program (MAAP) which provides an entire technology stack, services, and other resources to help businesses develop and deploy large-scale applications with advanced generative AI capabilities.
Chhabra said MongoDB systems integration and consulting partners will benefit from the new integration “because we’re making it easier for them to deploy General AI pilots and help take them into production for customers.”
Chhabra said that while larger enterprises are doing a lot of AI development and experimentation in-house, SMBs are looking for more fully packaged AI and GenAI solutions.
“I believe there is a big play for ISV application [developers] Those who are building purpose-built GenAI applications in the cloud on Azure, leveraging the MongoDB stack, leveraging our MAAP program,” Chhabra said. “So instead of customers having to build, they can buy GenAI solutions. When large companies like Microsoft work with cutting-edge, growing companies like MongoDB, we make it easier for customers and partners to deploy GenAI. [and] The entire ecosystem benefits.”
In another announcement from Ignite, MongoDB said that users who want to get the most insight from operational data can now do so in real-time with open mirroring in Microsoft Fabric for MongoDB Atlas. According to MongoDB, this connection keeps data in sync between MongoDB Atlas and OneLake in Microsoft Fabric, helping to generate real-time analytics, AI-based predictions, and business intelligence reports.
And the announcement of the launch of MongoDB Enterprise Advanced on the Azure Marketplace for Azure Arc-enabled Kubernetes applications gives customers greater flexibility to build and operate applications in on-premises, hybrid, multi-cloud, and edge Kubernetes environments.
Eliassen Group, a Reading, Mass.-based strategic consulting company that provides business, clinical and IT services, will use the new Microsoft integration to foster innovation and provide greater flexibility to its clients, MongoDB said.
“We have seen the incredible impact MongoDB Atlas has had on our customers’ businesses, and we are equally impressed by the capabilities of Microsoft Azure AI Foundry. “Now that these powerful platforms have been integrated, we are excited to combine the best of both worlds to create AI solutions that our customers will love as much as we do,” said David E., Vice President of Emerging Technology at Eliassen Group. said Vice President Colby Capps. In a statement.
The new extensions to the Microsoft alliance come a little more than a month after MongoDB introduced MongoDB 8.0, a significant update to the company’s core database that offers improved scalability, optimized performance, and enhanced enterprise-grade security .
By adding Nvidia’s HGX H100 and H200 GPUs to the Hitachi iQ AI-ready infrastructure, Hitachi Vantara aims to bring it to a wider range of enterprises and ultimately to mid-range customers looking for a complete technology stack for AI workloads. Wants to take.
Data storage and infrastructure technology developer Hitachi Vantara on Tuesday unveiled a new version of its Hitachi IQ AI solution stack that includes Nvidia’s HDX GPUs.
Hitachi iQ, unveiled earlier this year, brings together Hitachi Vantara’s AI-ready infrastructure technologies with Nvidia’s GPU technologies, said Tanya Loughlin, director of AI and unstructured solutions product marketing at Hitachi Vantara.
Laughlin told CRN that the Hitachi IQ brand includes the AI-ready infrastructure stack and services that the company brings to market. The first was in July with the launch of the Hitachi iQ with Nvidia DGX GPUs and Nvidia BasePod certification, he said.
On Tuesday, Hitachi Vantara is officially launching the Hitachi iQ with Nvidia’s HGX GPUs, Laughlin said.
Currently, with the DGX-version, Hitachi iQ AI systems are integrated in the field. However, HGX versions are integrated by Hitachi Vantara before shipping to channel partners and customers for final configuration, he said.
“What’s exciting about it is that we’re reselling it,” he said. “Nvidia’s DGX can only be sold by DGX-certified partners. With HGX, we are packaging and reselling the entire solution. Our infrastructure, our storage, Nvidia components, the HDX compute layer, as well as Nvidia’s AI enterprise, which is basically a framework for building AI applications. We will be able to package it, and it will go directly to customers from our distribution center.
The DGX was one of the original high-end GPU servers, built by Nvidia with eight H100 or H200 GPUs, and was originally sold by Nvidia to the company’s ODM partners, said Gary Hemminger, senior director of AI solutions product management at Hitachi Vantara. Was sent to.
“The HGX H100 and H200 that we’re selling are basically the SuperMicro version of the DGX,” Hemminger told CRN. “It has the same parts. About 90 percent of the system’s value is still the GPU, but it’s basically the same system. In fact, we tested the BasePod with the DGX. Then once we got the BasePod certification, we removed the DGX and replaced it with the HDX and confirmed that they get basically the same performance. Actually, the HDX is a little higher, but basically the same capacity, same GPU, same network interface, same performance.
The Hitachi iQ integrated system includes the Nvidia HGX H100 system based on the Supermicro SuperServer; Nvidia software including Nvidia Base Command Manager, Nvidia AI Enterprise, and Nvidia GPU Direct Storage; Hitachi Content Software for files with Hitachi Vantara storage nodes; Nvidia networking components; and Hitachi Vantara HA810 G3 servers for management.
The path to market is different for the DGX and HGX versions, Hemminger said.
“Going to market with DGX is basically getting into the channel and working with DGX-authorized resellers to differentiate our storage solution and our other solutions so they can plug them in,” he said. “With HDX, our goal is that we can sell everything.”
Laughlin said this gives it a fully integrated stack solution for resale to partners who are not DGX-certified.
“We are in multiple talks with DGX-certified partners to sell them. …So there are a number of different routes to market that we’re working on with partners,” she said.
Hitachi Vantara expects Hitachi IQ’s primary workloads to be primarily large language model building, fine-tuning and RAG (retrieval-augmented generation), Hemminger said.
“Maybe not too much can be predicted,” he said. “You’ll probably see it more in mid-range to entry products. In terms of industry verticals and use cases, our focus right now is mostly focused on where we are strong, vertical wise, which is banking and financial, healthcare, telecom and manufacturing.
Hemminger said Hitachi Vantara has not seen any supply issues related to recent reports that Nvidia is shifting some orders away from Supermicro.
“In terms of supply, it is not an issue at this point in time,” he said. “In fact, the major issue has been changes in US export laws. It has accepted country list, banned list and restricted list. For restricted list, you basically have to get special permission. ‘forbidden.’ The approved list included about 50 countries from around the world. Now it has come down to 25. So more countries have been put on the banned list, and so now the burden is on us, Nvidia and Supermicro because we have to get exceptions for those banned list countries every time. “There’s really no change in supply, it’s just which countries we can sell to without doing anything.”
The extension of Hitachi IQ is really an evolution of what Hitachi has done, said Dave Cerniglia, president of Consilient Technologies, an Irvine, California-based solutions provider and longtime Hitachi Vantara channel partner.
“The message from Hitachi Vantara has been very consistent,” Cerniglia told CRN. “It’s all about data and what you do with that data. You have these enterprises and medium-sized companies that are collecting massive amounts of data. And what is AI? AI is just another way of how you leverage that data, and how you leverage those workflows to make an organization more profitable.
Cerniglia said Hitachi Vantara is continuously unveiling technologies that are increasingly robust and scalable.
“Hitachi focuses on scalability, performance, which they can deliver to their customers with massive amounts of data,” he said. “So for me, this is the AI story. It’s really, how do you turn a company’s data into intelligence? How can you turn that data into an asset? So I’m excited. I think this is another way for us to be able to offer customers more of what they want.”
Microsoft Copilot Actions, AI agents inside SharePoint and a new Azure AI Foundry experience are among the big reveals.
Microsoft Copilot Actions prompt templates. Artificial intelligence agents inside SharePoint. And a new Azure AI Foundry experience for designing and managing AI apps and agents.
These are some of the biggest new products and updates the Redmond, Wash.-based tech giant is revealing this week during its Ignite 2024 event.
Ignite runs through Friday, with programming in person in Chicago and online. Microsoft had 200,000-plus people register for the event and expected 14,000-plus in-person attendees. [RELATED: Microsoft CEO: AI Provides ‘On-Ramp’ To Azure Data Services, Copilot Continues To Surge]
Microsoft Ignite 2024
In total, Microsoft revealed 80 new products and features across its product portfolio, a number of those focused on the emerging AI era.
About 70 percent of the Fortune 500 use Microsoft’s Copilot AI tool, according to the vendor. For every $1 invested, companies see a return of $3.70, with some of the highest returns reaching $10.
Microsoft also said that about 600,000 organizations have used Copilot in Power Platform and other AI-powered capabilities, up fourfold year over year.
Accenture, No. 1 on CRN’s 2024 Solution Provider 500, is in the process of rolling out Microsoft copilots and agents to 100,000 employees, according to Microsoft. It has a commitment to deploy 200,000 more.
AI looks to feature prominently for the vendor’s 400,000-member partner ecosystem in 2025. In Microsoft’s latest quarterly earnings call, Chairman and CEO Satya Nadella said that the company’s AI businesses should “surpass an annual revenue run rate of $10 billion next quarter, which will make it the fastest business in our history to reach this milestone.”
“When I talk about Copilot, Copilot Studio, agents, it’s really as much about a new way to work,” Nadella said on the call. “I describe it as what happened throughout the ’90s with PC penetration. After all, if you take a business process like forecasting, what was it like pre-email and Excel and post-email and Excel. That’s the type of change that you see with Copilot.”
Here are the biggest news items coming out of Ignite 2024 in AI and with Microsoft Copilot.
Copilot Actions, Copilot In Teams
Microsoft moved its Copilot Actions customizable prompt templates into private preview, the vendor announced during Ignite.
Users can leverage Actions to receive status updates and agenda items from colleagues and employees, compile weekly reports, schedule daily emails summarizing other emails and Microsoft Teams chats and more.
Actions users can automate templates on demand or with an event trigger. Actions can deliver information in an email, Word document and other specified formats, according to the vendor.
Microsoft will push new Copilot in Teams abilities into preview in early 2025, including a way for users to analyze screen-shared content in the collaboration platform and summarize file content in mobile and on desktop.
Screen-shared content will be available for Copilot summarizations, insight and for use when drafting new content, according to the vendor.
The new file summaries ability will apply to one-to-one and group chats in Teams. This feature will also follow file security policies so that users with unauthorized access don’t receive summaries.
New Microsoft 365 Agents
Microsoft introduced a host of AI agents during Ignite, with one such offering, Agents in SharePoint, entering general availability.
These agents are grounded on users’ SharePoint sites, files and folders to improve finding answers from that content, according to Microsoft. Every SharePoint site will include an agent tailored to its content. Users can also make their own agents scoped to select SharePoint folders, sites and files.
Users can give agents a name and behaviors and answer questions in real time, according to Microsoft. The SharePoint agents will follow existing user permissions and sensitivity labels.
Employee self-service agents have entered private preview. These agents in Microsoft 365 Copilot Business Chat (BizChat) can answer common policy-related questions and do some human resources tasks such as understanding a particular employee benefit, retrieving payroll information and starting a leave of absence.
These agents can also handle some IT tasks, including a request for a new laptop and assisting with a Microsoft 365 product. Users can customize these agents in Copilot Studio.
In preview are facilitator agents and project manager agents. Facilitator agents take notes in Teams meetings in real time and summarize information from Teams chats as conversations happen, according to Microsoft.
Project manager agents in Planner can create new plans and use preconfigured templates. The agent will oversee entire projects, assigning tasks, tracking progress and sending reminders and notifications. It can even complete tasks and create content.
Interpreter agents are expected to enter preview early next year. These AI agents can interpret up to nine languages in real time in Teams meetings. Meeting members can have the agent simulate their personal voice.
Azure AI Foundry
Microsoft introduced Ignite watchers to its Azure AI Foundry experience for designing, customizing and managing AI apps and agents.
Now available in preview are the Azure AI Foundry portal—the former Azure AI Studio—and the Foundry SDK.
The portal is the visual user interface for finding AI models, services and tools. Users can see subscription information in a single dashboard. IT administrators, operations personnel and those focused on compliance can manage AI apps at scale in the portal.
The SDK has a unified toolchain, 25 prebuilt templates and a coding experience users can access from GitHub, Visual Studio, Copilot Studio and other tools, according to Microsoft. Users can leverage the SDK for integrating Azure AI into their applications.
Coming soon to preview is the Azure AI Foundry Agent Service. This feature should allow developers to orchestrate, deploy and scale agents for automating business processes, according to Microsoft. Agent Service will allow for bring-your-own-storage and private networking for data privacy and compliance.
Foundry portal and SDK will gain a preview in December for Azure AI risk and safety evaluations for image content. These capabilities should help users assess the frequency and severity of harmful content in AI-generated outputs.
These evaluations will allow Azure AI to go beyond text-based evaluations and assess text inputs yielding image outputs, image inputs yielding text outputs and images with text—such as a meme—as inputs yielding text or images.
Users can leverage these evaluations for modifying multimodal content filters with Azure AI Content Safety and adjusting data sources for grounding. Users can also update system messages before deploying apps to production.
Copilot Control System
A Copilot Control System from Microsoft aims to help IT manage copilots and agents with data access, governance, security controls, measurement reports, business value tracking tools and adoption tracking tools.
One of the features of the Control System is Copilot in Microsoft 365 Administration Centers (MAC), now in private preview and set for general availability early next year, according to the tech giant.
Copilot in MAC leverages AI to do routine tasks by IT administrators and suggest ways to get more value out of M365 subscriptions. It will be available in the admin centers for M365, Teams and SharePoint and provide summaries of trends across an administrator’s assigned areas. The copilot can also summarize message center posts across all apps and services and meeting reports. It can troubleshoot call quality and other user issues with natural language.
Another feature in the Control System is Copilot Analytics. General availability capabilities within Copilot Analytics include a dashboard that covers Copilot readiness, adoption and learning and M365 admin center reporting to surface adoption and usage trends.
In early 2025, Copilot Analytics will include Viva Insights for no additional charge. Insights is a measurement toolset for productivity and business outcomes.
Copilot Studio Updates
Copilot Studio gained a multitude of previews, including ones for autonomous agentic capabilities, an agent library and an agent SDK.
The autonomous agents can take actions on a user’s behalf without prompting each time. These agents act in the background when recording an uploaded file, receiving an email and responding to events, according to Microsoft.
The autonomous agents plan, learn from processes, adapt to new conditions and make decisions.
The library has templates for leave management, sales order, deal acceleration and other common agent scenarios.
The SDK will allow developers to build multi-channel agents that leverage Azure AI, Semantic Kernel and Copilot Studio services and are deployable across Teams, Copilot, web and third-party messaging platforms.
More previews include image uploads for agents to analyze and advanced knowledge tuning to match specific instructions to unanswered questions,
Copilot Studio integrations with Azure AI Foundry will give Studio access to 1,800-plus models in the Azure catalog. A bring-your-own model capability is in private preview, as is the ability to embed voice-enabled agents in Studio and voice experiences in applications and websites.
A new pay-as-you-go consumptive billing option for Copilot Studio messages through existing Azure subscriptions will become available for users on Dec. 1.
Copilot Pages Upgrades
Microsoft has plans to make new features in its Copilot Pages content creation canvas generally available in early 2025, including rich artifacts and multi-page support.
The rich artifacts support means Pages will gain the ability to support code, interactive charts, tables, diagrams and math from enterprise data and web data, according to Microsoft.
Multi-page support will give users ways to add content from multiple chat sessions and from Pages made in previous Copilot conversations.
Other features entering general availability include grounding Copilot chat prompts on Page content as the page is updated for better answer relevancy and Pages viewing, editing and sharing on mobile.
Copilot in PowerPoint, Outlook, Excel
Copilot in PowerPoint should have some new features in 2025, including a narrative builder based on a file and organization image support.
Narrative builder based on referenced file will enter general availability in January, allowing for better first drafts of slides, according to Microsoft. Copilot will add branded designs, speaker notes, transitions and animations to the presentation.
In the first quarter, a capability for bringing images from SharePoint Organization Asset Library, Templafy and other asset libraries into Copilot in PowerPoint will enter general availability.
Microsoft will also increase access to presentation translations, with all Copilot in PowerPoint web users getting the ability to translate presentations into one of 40 languages in December. Desktop and Mac users will gain the capability in January.
By the end of the month, Copilot in Outlook will gain the ability to schedule focus time and one-on-one meetings based on a user prompt. Copilot will find the best times for everyone and draft an agenda based on the prompt’s details of the meeting
Before year’s end, Copilot in Excel will add a new start experience wherein Copilot suggests the type of spreadsheets users should make based on what they want. Copilot can also refine the template with headers, formulas and visuals.
Microsoft Places Enters General Availability
Microsoft revealed that its Places AI-powered workplace application has entered general availability, bringing location data to Teams and Outlook to help with in-office day planning.
Copilot can recommend when users should go into the office based on in-person meetings, guidance and when common collaborators will be in, according to Microsoft. It can manage room bookings for one-time or recurring meetings and help book rooms and desks based on images of the office and floor plans.
Administrators can leverage Places for an analysis of intended and actual occupancy for spaces.
Azure AI Content Understanding
Now in preview is the service Azure AI Content Understanding, which aims to assist developers in building and deploying multimodal applications.
The service uses GenAI to get information from documents, images, videos, audio and other unstructured data and put that information into customizable structured outputs, according to the tech giant.
Content Understanding has prebuilt templates and ways to customize outputs for call center analytics, marketing automation, content search and other use cases. The service also has prebuilt schemas for users to say what they want extracted from data, such as captions, transcripts, summaries, thumbnails and highlights.
Microsoft Fabric News
Microsoft’s Fabric data integration platform gained a host of new previews, including ones for Fabric Databases, SQL database in Fabric and open mirroring.
Fabric Databases aims to unite transactional and analytical workloads to improve app development optimized for AI databases, according to Microsoft. SQL database in Fabric is the first database engine in Fabric, with plans for Azure Cosmos DB and Azure Database for PostgreSQL to join.
SQL database in Fabric will allow for faster app building with data automatically replicated in Fabric’s multi-cloud data lake OneLake and native vector search capabilities allowing for retrieval augmented generation (RAG).
This capability will also allow for auto-optimizing databases, auto-scaling them and translating natural language queries into SQL with inline code compilation next to code fixes and explanations.
The goal of open mirroring, meanwhile, is to allow any app or data provider to bring the data estate into OneLake within Fabric so they can write change data into a mirrored database in Fabric.
A new OneLake catalog is also now generally available for exploring, managing and governing the Fabric data estate across notebooks, lakehouses, warehouses, machine learning models and more.
Windows 365 Link, security exposure management and a new post-CrowdStrike faulty update initiative are among the big announcements.
Microsoft’s Windows 365 links devices. Security exposure management is becoming generally available. And a new initiative to make improvements after a faulty CrowdStrike update in July.
These are some of the biggest device and security news coming from Microsoft’s Ignite 2024 event.
Ignite runs through Friday, with in-person and online programming in Chicago. Microsoft had registered more than 200,000 people for the event and was expecting more than 14,000 to attend in person. [RELATED: Microsoft CEO: AI Provides ‘On-Ramp’ To Azure Data Services, Copilot Continues To Surge]
Microsoft Ignite 2024
Redmond, Wash. The based tech giant unveiled 80 new products and features in its product portfolio.
According to Microsoft, Windows 11 has seen a three-fold reduction in firmware attacks and almost three times fewer credential theft incidents compared to Windows 10.
During Ignite, Microsoft said that the controversial recall feature would be disabled by default for Copilot+ PCs. IT will enable this feature through new policies before employees opt in.
Microsoft Chairman and CEO Satya Nadella shared his enthusiasm for the vendor’s devices and security portfolio during the vendor’s recent quarterly earnings call.
“It’s about hybrid AI where the rebirth of the PC as the edge of AI is going to be one of the most exciting things for developers,” Nadella said on Microsoft’s Copilot+ PC.
Nadella said customers have used Defender to find and secure more than 750,000 GenAI app instances. They have used Parview to audit over 1 billion Copilot interactions to ensure they meet compliance obligations.
Here’s everything you need to know in security and device news from Ignite 2024.
windows 365 link
In device news, Microsoft has previewed Windows 365 Link devices built for its Windows 365 cloud-based virtual machine service, with Link becoming generally available in April with a manufacturer’s suggested retail price of $349.
According to Microsoft, interested organizations in the US, UK, New Zealand, Japan, Germany, Canada, and Australia can apply for the preview program.
According to Microsoft, users can place Link on their desk, boot it up in seconds and perform local processing for Teams meetings, Webex by Cisco, and other high-fidelity experiences.
The Link supports dual 4K monitors, four USB ports, an audio port, an Ethernet port, Wi-Fi 6E, and Bluetooth 5.3.
The device has no local data, apps, or non-administrator users. Corporate data is safe in the Microsoft cloud. Security default policies are on by default. Users cannot turn off security features.
Users can leverage Microsoft Entra ID, Microsoft Authenticator app, or USB security key for passwordless login.
Microsoft Intune users can manage devices linked with other PCs. Links are configured in minutes and updated automatically when turned on for the first time. They are factory-reset in minutes for reusability.
better windows search
Starting next year, Windows Insider Program members with Snapdragon-powered CoPilot+ PCs will have the ability to take advantage of their Neural Processing Units (NPUs) for better search with File Explorer, Windows Search, and Settings.
Users can find documents, photos, and other files without having to search for file names or exact file contents. They can describe content with synonyms, even text that may appear in an image. This feature will work even without internet connection.
Enhanced search will be coming to Windows 365 cloud PCs in the spring.
Microsoft Security Exposure Management goes to general availability
Microsoft has made its security exposure management experience generally available to practitioners assessing cyber threats.
Exposure management integrates disparate data silos for better attack surface visibility, assessing attack paths to assets and across devices, identities, apps, data, on-premises, hybrid and multi-cloud infrastructures. Provides context-based recommendations to improve security posture.
According to Microsoft, the tool has attack path analysis capabilities with modeling and blast radius estimation, as well as integrated insights that bring in currency data from other vendors.
Microsoft Purview Update
Microsoft updated its Purview data governance and compliance platform to include the general availability of Customer Lockbox, which provides data protection for Windows 365 with users in the approval workflow process, and Data Security Posture Management, as well as AI Provides DSPM for.
According to Microsoft, DSPM for AI should help IT administrators and data stewards find risks and prevent data oversharing, data leakage, and other incidents. The tool works on Copilot, custom apps built on Copilot Studio, and third-party apps like ChatGPT Enterprise by Microsoft-backed OpenAI.
The new Purview preview includes data loss prevention (DLP) for Microsoft 365 Copilot – aimed at ensuring that the content of sensitive documents is not abstracted by AI – and Azure Microsoft Rights Management-defined sensitivity labels for administrators Ability to extend Office files and PDFs to SharePoint document libraries comfortably.
By the end of the year, Purview will have a preview of embedded Security CoPilot capabilities, including DSPMs with AI-powered data estate risk insights in natural language and suggested prompts to guide users through investigations.
Other Security CoPilot capabilities entering preview are DLP policy understanding, eDiscovery case summaries, and a CoPilot-powered knowledge center.
Features after the CrowdStrike incident
During Ignite 2024, Microsoft introduced its Windows Resiliency Initiative, which is based on learnings from the global outage caused by CrowdStrike. faulty update In July.
According to Microsoft, the initiative also focuses on allowing more apps and users to run without administrator privileges, stronger controls for which apps and drivers can run, and better identity protection.
Quick Machine Recovery is a feature that will come to the Windows Insider Program in early 2025 thanks to this initiative. With this feature, IT administrators can target Windows Update fixes to PCs, even when the machines cannot boot and do not have physical access to the PC.
There are ways to build security products outside of kernel mode, coming as a private preview to the security product ecosystem in July. According to Microsoft, antivirus and other security products will have the ability to run in user mode, just like apps. This will provide better resiliency to Windows in the event of a crash or error.
windows security updates
Microsoft said it is addressing long-standing complaints about Windows security — over-privileged users and applications, unverified apps and drivers, and insecure credentials and authentication.
The preview has Administrator Security, a tool that has standard user permissions security by default. If a system change requires administrator rights, users are asked to authorize the change using Windows Hello. Windows creates a temporary separate administrator token that is destroyed after the task is completed.
According to Microsoft, the new AI capabilities for Smart App Control and App Control for Business attempt to make the tool easier to deploy. A signed and reputable policy template should allow millions of verified apps to run, regardless of deployment location.
And the Personal Data Encryption (PDE) layer now generally available for Windows Enterprise should add more security to personal user files on laptops that are now readable only with Windows Hello sign in. PDE also integrates with OneDrive and SharePoint and is manageable with Intune.
Windows CoPilot Runtime, Windows Subsystem for Linux
Microsoft has added new AI APIs and improved frameworks and tools to the Windows Copilot Runtime to help developers scale AI across devices.
APIs for image description, image super resolution, object erasure, and optical character recognition are coming in January.
According to Microsoft, Windows Subsystem for Linux (WSL) has added integration with Intune, which is now generally available, and Entra ID, which is now in private preview.
In the coming months, Microsoft will preview a new distribution architecture for WSL to better manage and optimize it with enterprise security policies.
A new preview of Hotpack for Windows gives users a way to download updates in the background and have the installation take effect without restarting the device.
The preview coming before 2026 for Windows Autopatch AI integration with Copilot in Intune means IT administrators can only access data within their permissions and Windows users can prepare for feature updates, ready devices, and other uses. Can get payload details between cases.
A now generally available configuration refresh feature is available to enforce mobile device management (MDM) security policies by returning a PC to a preferred configuration, avoiding configuration drift when users change the system registry. Refresh also works offline with device self-management locally.
Mixed Reality, Windows in the Modern Environment
Microsoft has a preview coming in December for Windows 11 in Meta Quest headsets, which allows users to take advantage of Windows for virtual meetings and high-resolution monitors.
Windows 11 Mixed Reality Access will debut with the Quest 3 and Quest 3S headsets.
The preview available now allows a shared mode for provisioning Windows 365 Frontline. This mode is for users who need brief access to ad-hoc tasks in a non-personalized Windows desktop environment. User data is deleted upon signoff.
Another preview is for Windows Apps Mobile Application Management (MAM) support for iOS and Android to define device security criteria and customized access.
Azure Chips, Infrastructure
At Ignite, Microsoft introduced its Azure Integrated Hardware Security Module (HSM) in-house cloud security chip.
Next year, Microsoft will begin installing HSM in every new server in its data centers for confidential and general-purpose workloads.
The vendor also showcased its first in-house data processing unit silicon, the Azure Boost DPU. According to Microsoft, the purpose of a DPU is to work on storage, networking, acceleration, and more. Future DPU-equipped servers should run cloud storage workloads at three times less power and four times the performance of existing servers.
A liquid cooling heat exchanger unit rack by Microsoft should support large-scale AI systems on Azure, including Microsoft’s Azure Maia. Microsoft can reinstall the unit in Azure data centers.
Microsoft and Meta have collaborated on a differentiated power rack design with 400-volt DC power for 35 percent more AI accelerators per server rack. The vendors are open-sourcing the specifications through the Open Compute Project.
Microsoft launches preview of Nvidia Blackwell GB200-powered Azure AI system. Azure ND GB200 V6 is the new AI-optimized virtual machine series powered by Nvidia GB200 superchips.
More infrastructure news
Microsoft has made Azure Local cloud-controlled, hybrid infrastructure platform and Windows Server 2025 generally available.
Local extends Azure services across distributed locations for mission-critical workloads and cloud-native applications and AI. Runs containers, servers, and Azure Virtual Desktop (AVD) on Microsoft-accredited hardware from Hewlett Packard Enterprise, Lenovo, Dell Technologies, and others for local custom latency, near-real-time data processing, and compliance.
Windows Server 2025 has a preview of hot-patching subscriptions for easier upgrades, improved security, and update installation with fewer restarts.
Microsoft also moved SQL Server 2025 to private preview. According to Microsoft, this database platform should simplify AI app development and RAG patterns.
Paolo Dal Sin of Accenture Security told CRN that services are ready to use the technologies to secure GenAI use, as well as protect against AI-powered attacks and security capabilities.
A newly announced set of Accenture cybersecurity services aims to secure customer use of GenAI, as well as protect against GenAI-powered attacks and enhance existing security capabilities, the head of Accenture Security told CRN.
IT consulting giant Accenture, number 1 on CRN solution provider 500 The new services were launched on Tuesday amid a shift in the market towards adopting GenAI for business growth and its productivity enhancing capabilities, for 2024, according to Paolo Dal Sin, global head of Accenture Security. [Related: Accenture To Train 30,000 Staff On Nvidia AI Tech In Blockbuster Deal]
Ultimately, Accenture sees GenAI as an opportunity “to develop new services and re-invent existing services” within the cybersecurity sector, Dal Sin said.
The launch includes Accenture’s announcement of several new Cyber Future Centers around the world focused on GenAI and security, including a new US center in Washington, D.C. It said each center will employ more than 100 experts.
To safeguard the adoption of GenAI, Accenture is unveiling its Secure AI Solutions offering, focused on enabling organizations to mitigate the risks of data exposure and address vulnerabilities in AI models so organizations can adopt new technologies. To avail benefits safely.
For example, according to Dail Sin, Accenture will now offer a GenAI security diagnostic tool that it previously piloted internally for its own use.
The tool’s capabilities include identifying vulnerable data lakes and unauthenticated fundamental models, he said, while also providing the ability to shield fundamental models from rapid injection attacks using an approach similar to an “AI firewall.”
To protect against AI-powered attacks, Accenture’s new services will include protection capabilities against deepfakes, which are increasingly being used for phishing and social engineering.
Key components will include a service co-engineered with deepfake detection startup Reality Defender, which Accenture has backed as an investor. “I believe there is nothing available yet on the market” that is comparable, Dal Sin said.
Meanwhile, Accenture is also enhancing its existing cybersecurity services using GenAI, including its managed detection and response (MDR) offering.
According to Del Sin, the company’s security teams are taking advantage of AI assistants that can better collect and analyze threat intelligence, ultimately significantly improving risk correlation.
Using these capabilities, he said, “there has been a material improvement in the effectiveness” of providing MDR to customers.
Another example is on identity security, where Accenture has developed an agent that can dramatically improve the speed of user provisioning and access control, Del Sin said.
“It’s something we’ve never seen before, because it was a very people-centric service,” he said.