
evoilutioncast
06.08.2025 09:00

evoilutioncast 45: How to build a solid foundation for AI on VCF?

In this episode of the IT podcast evoilutioncast, Maciej Lelusz speaks with Frank Denneman, a leading AI voice at VMware by Broadcom. Frank plays a key role in the VCF division, where he shapes the roadmap for Private AI Foundation with NVIDIA and heavily influences the division's overall AI strategy.

✔️ Curious whether VCF is the right infrastructure for AI?

✔️ Does it bring the AI construct into the data center?

✔️ The truth is that with AI nothing is easy, but you can make it easier.

✔️ There are things that should be approved by humans and things that can be done by AI.

✔️ Listen to the conversation to find out about new trends in AI infrastructure such as RAG, #vector databases, and many more.

🤝 Episode's Partner: VMware by Broadcom

𝓛𝓲𝓼𝓽 𝓸𝓯 𝓬𝓸𝓷𝓽𝓮𝓷𝓽:

AI on the VCF (VMware Cloud Foundation) platform: what is it all about?

VCF 9 as a local hypervisor?

SaaS solutions as a starter on the Cloud Foundation platform

A platform for both engineers and developers - what is the VCF platform?

Cost and spending tracking on a private cloud platform

Retrieval Augmented Generation (RAG) is the most common use case

The most trending solutions on the market: AI-augmented summarization in healthcare

The data ingestion pipeline: building a vector database

Similarity search and embedding models: how do they work?

What does the VCF platform give an organization as an open infrastructure model?

Next steps for VCF? Wider integration and an easier way of consuming the platform - giving the best way of consuming the resources that you have

🔔 Subscribe: https://bit.ly/sub_evoilutioncast

Search within the episode transcript

Found 48 results for "AI"

You know what, what I want to talk with you about is of course AI, because nobody talks about anything else, but... Let's start with, you know, the more or less crazy stories about how it will change our world, you know, blah, blah, blah.

And I think that's the way we should talk about technology right now, especially AI, because if you only see the magical stuff it can do, you basically stop thinking about real use cases - it's so fancy, with such super easy access, you know, all these Studio Ghibli-style pictures.

We have VMware AI and NVIDIA, right?

However, when we started to develop this platform, so it's built on top of the VCF platform, we started to think, okay, if we look at the AI ecosystem, the broad landscape, it's very wide, it's also very deep, and it's changing

Don't go down that road and try to paint everything with AI.

There is not enough power and AI resources for every customer in this world to do it at scale.

And besides that, you have an AI platform, right?

Yes, but it was the first time we at VMware moved to a container runtime, because we said we should be a bit opinionated, to give an easy way to get started with any of your AI services.

We said no.

In essence, what we do with Private AI Foundation is we focus on

Then, once you learn and understand what you need AI for - not just for creating Studio Ghibli pictures, but for some realistic scenarios - then you can step back from that side, right?

At the "build your own" end of the spectrum, with the DLVM (Deep Learning VM) and the AI Kubernetes clusters, we give you enough resources aligned with the technology stack.

So to go a little bit more into detail, if you build a virtual machine or a Kubernetes environment with a container runtime,

Now let's take the scope of use of this DLVM, or the AI Kubernetes cluster.

That means that you have a large language model that's already pre-trained with a lot of knowledge, with a lot of data.

If you install a model, the last thing you want to do is just put it straight into production.

So with that DLVM, what we do is we provide you with the ability just to spin that up using automation, because we already have these templates available.

You can spin up that AI Kubernetes cluster and basically install all the serving frameworks and everything we have in Private AI Foundation.

Do we have any option in VCF plus private AI with NVIDIA?

So fine-tuning is basically a shorter training period with a smaller amount of data
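The idea of fine-tuning mentioned here can be sketched with a toy model: start from pretrained weights and continue training briefly on a small dataset with a lower learning rate, instead of training from scratch. The tiny linear model below is purely illustrative (no real LLM or VMware/NVIDIA tooling is involved); all numbers are made up.

```python
# Toy illustration of fine-tuning: "pretrain" on lots of data, then continue
# training from those weights on a few domain examples with a smaller
# learning rate and fewer epochs. Not related to any real product API.

def train(w, b, data, lr, epochs):
    """A few SGD steps on y = w*x + b with squared-error loss."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x      # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err          # gradient of 0.5*err^2 w.r.t. b
    return w, b

# "Pretraining": many examples, many epochs (here the rule is y = 2x).
pretrain_data = [(x, 2.0 * x) for x in range(-5, 6)]
w, b = train(0.0, 0.0, pretrain_data, lr=0.01, epochs=200)

# "Fine-tuning": a handful of domain examples, fewer epochs, lower learning
# rate (here the domain shifts slightly: y = 2x + 1).
finetune_data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(w, b, finetune_data, lr=0.005, epochs=50)
```

After fine-tuning, the weights have moved toward the shifted domain without discarding what the "pretrained" model already learned - which is exactly the trade-off Frank describes: shorter training, less data.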

Now, what some of these organizations have done is they said, let's grab all of that data.

The reality is that instead of thinking about how can I expose my AI platform to my own customers, now I'm going to give it to my own personnel.

And so what happens is most of these conversations, the surgeon or the specialist, they are going through a particular questionnaire.

With a GenAI system they can basically say, okay, give me a summary why the system flagged this.

GenAI won't replace jobs or the majority of jobs won't be replaced by GenAI, but you will be replaced by somebody who's using GenAI.

And to try to automate them, because some of them, like filling in the blanks, right, or looking into the connections between the events, can at least be summarized by AI, right?

It's not about making decisions and putting AI over the entire process, but just augmenting it.

Many specialists are able to focus on what is important, just like a doctor.

They will let AI fill in the tickets afterwards with a standard reply or whatever, you know, or it will tell them, hey, this is the pattern, the guy is doing it every three weeks, you know, just click this and that, and it will solve the problem, because it's been the same situation for the last five years.

I'm really interested in more of that, but I'm also very interested in what happens if you tune this model, right - then, as we understand it, we basically replace the brain, right, we replace the model with a new one.

Yeah, with AI nothing is really easy, but you can make it easier, right?

You can identify certain steps in the ingestion pipeline.

That pipeline is something you can build with what is called the data indexing and retrieval system of VMware Private AI Foundation.
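The ingestion pipeline discussed here can be sketched end to end: chunk documents, turn each chunk into an embedding vector, index the vectors, and answer a query by similarity search. The hash-based "embedding" below is a deliberately crude stand-in so the sketch runs with no dependencies; a real RAG setup would use a trained embedding model and a proper vector database.

```python
# Minimal sketch of a RAG ingestion pipeline: chunk -> embed -> index ->
# similarity search. The toy word-hashing embedding is illustrative only.
import hashlib
import math

def embed(text, dims=64):
    """Toy embedding: hash each word into one of `dims` buckets, then normalize."""
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]          # unit length, so dot = cosine

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Ingestion: chunk the "documents" and index (chunk, embedding) pairs.
chunks = [
    "VCF provides the private cloud infrastructure layer",
    "the surgeon reviews a questionnaire before the operation",
    "a vector database stores embeddings for similarity search",
]
index = [(c, embed(c)) for c in chunks]

# Retrieval: embed the question and return the most similar chunk.
query = embed("where are embeddings stored for similarity search")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])   # prints the vector-database chunk
```

In a production pipeline each of these steps (chunking, embedding, indexing, retrieval) is a separate, maintainable stage - which is exactly why Frank talks about identifying individual steps in the ingestion pipeline.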

Those are the elements that we currently offer within Private AI Foundation with NVIDIA.

Okay, so the whole platform lets you go from, let's say, ingesting data, working on the data, feeding the data into a model that you pick from a secure library, where you have a safety net underneath.

There are organizations that have been working with AI from the beginning.

They can extend that capability with Private AI with NVIDIA, from Broadcom.

That's not our use case, but what we focus on is the layer of interest.

You at VMware by Broadcom are focusing on the infrastructure for AI.

It's more like you enable the platform, and maybe in the future some marketplace to quickly buy ready-made AI solutions.

It means this model GUI and stuff like that to run on private AI with NVIDIA from Broadcom plus Tanzu platform, right?

They deliver that in VMs or containers and they use packed models with some tuning, right?

And then if you introduce the AI solution on top of that, this is really something that will change their life.

Because you have to figure out how is it used, how do we scale, what are the patterns of the user itself, is it like a linear growth or is it a hockey stick growth, what happens with maintenance, all of this stuff.

Yeah, so there are private AI services like the model runtime, the model gallery, data analytics and retrieval, and the agent builder.

Yes, yes, because as I said at the beginning, in this world we don't have enough power and resources to build AI at the level of every single company.

I really appreciate what you're doing there, because you're bringing the AI construct in general to the data center, where for many companies it has belonged from the very beginning.

Because, like you said, sending your data somewhere that's not entirely under your control doesn't always make you feel very comfortable.