EDGE AI POD

Transforming Human-Computer Interaction with OpenVINO

EDGE AI FOUNDATION

The gap between science fiction and reality is closing rapidly. Remember when talking to computers was just a fantasy in movies? Raymond Lo's presentation on building chatbots with OpenVINO reveals how Intel is transforming ordinary PCs into extraordinary AI companions.

Imagine generating a photorealistic teddy bear image in just eight seconds on your laptop's integrated GPU. Or having a natural conversation with a locally-running chatbot that doesn't need cloud connectivity. These scenarios aren't futuristic dreams – they're happening right now thanks to breakthroughs in optimizing AI models for consumer hardware.

The key breakthrough isn't just raw computational power but intelligent optimization. When Raymond's team first attempted to run large language models locally, they didn't face computational bottlenecks – they hit memory walls. Models simply wouldn't fit in available RAM. Through sophisticated compression techniques like quantization, they've reduced memory requirements by 75% while maintaining remarkable accuracy. The Neural Network Compression Framework (NNCF) now allows developers to experiment with different compression techniques to find the perfect balance between size and performance.
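
The arithmetic behind that figure is simple: FP32 stores each weight in four bytes, INT8 in one, a 75% reduction. As a minimal sketch of what this compression step looks like with NNCF (the model path here is purely illustrative):

```python
# Sketch: 8-bit weight compression with NNCF on an OpenVINO model.
# The IR path is illustrative; any OpenVINO model works the same way.
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("llama-7b-fp32.xml")  # hypothetical FP32 IR (4 bytes/weight)

# INT8 weight compression: roughly 75% smaller weights (4 bytes -> 1 byte).
compressed = nncf.compress_weights(model)  # defaults to 8-bit weight compression
ov.save_model(compressed, "llama-7b-int8.xml")
```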

What makes this particularly exciting is the deep integration with Windows and other platforms. Microsoft's AI Foundry now incorporates OpenVINO technology, meaning when you purchase a new PC, it comes ready to deliver optimized AI experiences out of the box. This represents a fundamental shift in how we think about computing – from tools we command with keyboards and mice to companions we converse with naturally.

For developers, OpenVINO offers a treasure trove of resources – hundreds of notebooks with examples ranging from computer vision to generative AI. This dramatically accelerates development cycles, turning what used to take months into weeks. As Raymond revealed, even complex demos can be created in just two weeks using these tools.

Ready to transform your PC into an AI powerhouse? Explore OpenVINO today and join the revolution in human-computer interaction. Your next conversation partner might be sitting on your desk already.

Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

Speaker 1:

So welcome to Raymond Lo from Intel. He's going to speak about building your chatbots with OpenVINO and the AI PC. Raymond, welcome, and thanks for being with us today. You have 25 minutes plus five minutes of Q&A, and the floor is yours. Thanks so much.

Speaker 2:

Perfect. Thank you again. I have a lot of slides, so bear with me today. My name is Raymond, I'm from Intel, and I've been with Intel for almost five years. If you're interested in what we're doing, follow me or our team on LinkedIn; you'll see a lot of news about what we're doing at Intel. Oftentimes you'll see a post from me before there's an announcement. For example, I work on OpenVINO a lot, so sometimes I test pre-release material and you can see it there. But with no further ado.

Speaker 2:

Let's talk about chatbots. People call it agentic sometimes; for me it's really about how we can talk to a computer, and today we'll talk about that, because getting a machine to talk to you was not easy, and when it does, it's very powerful. So let's think about the journey. I've done a lot of different programming exercises since I was young, but one thing that really intrigues me today is that we can have a genuinely interactive experience with computers. I remember the first time I got a computer: you get a mouse and keyboard, you click on things, type some documents. It feels good, but what if we could have a proper conversation, and what would that bring us? Think of an experience like going to the hospital. How many times have you walked in trying to figure out, okay, who should I talk to, where's the nurse? And oftentimes you go in and the nurse asks, why are you here? Of course we're sick; of course we're not feeling comfortable at that moment. If we can reduce some of that wait time, there's a huge benefit from building such a chatbot, or assistant as we sometimes call it. My brother is a doctor, and we saw a lot of inefficiency in the system, especially around note-taking and things like that. As we see today, this is becoming a reality. When I saw the doctor a couple of weeks ago, I walked into the office and the person had already turned on a phone at the edge, recording our conversation, transcribing it, and making for a better experience when things needed attention.

Speaker 2:

I think for us at Intel this is a great opportunity, because the PC sitting in front of you can be your best companion. And why do I think so? They're not just talking, these chatbots. Of course you can run a large language model. When I first started, I started with Llama 2, which feels like ancient times now, and it was not answering questions well; sometimes it went off track, hallucinating. But today, if you look at what Microsoft released, for example Phi-3.5, it can answer questions much better, even with images. So imagine asking a question about what's happening in an image: they can actually do well. And just last week I was at Microsoft Build, where we saw scientists using these tools for discovery, and more and more. That's why I think of the chatbot as really being about how we can communicate with the computer in a deeper way. Think about doing research and thought processes.

Speaker 2:

And for consumers, I don't know if anyone here has babies, but I have a baby and it's one year old. The number one question my wife asks me is, what's my baby doing? Imagine a smart camera you can set up without coding. All you need is a large language model behind it, a vision language model, and then you can get answers like, oh, my baby was trying to play with the computer, play with the keyboard. That's something we're actually seeing on the edge: we can make extremely smart computers without a lot of computational power. It may not be very accurate, but we're just at the beginning. How much further we can get within a year is something worth thinking about.

Speaker 2:

And one thing a lot of us find when we build systems on the edge is that you never have a big enough model. You always have a small model that can actually fit into memory, with maybe more limited reasoning. So we see the next generation being more agentic, with RAG on top. Why is that important? We test a lot of these. For example, Llama 3 8B works extremely well for simple use cases, and RAG allows you to augment the chatbot with extra information. For example, ask it what an AI PC is: it will not know unless I give it the document. So in edge use cases, sometimes you want to provide extra information so the chatbot becomes contextually aware, as in the sketch below.
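
As a rough sketch of the idea, you can skip a full retrieval index and simply prepend the document to the prompt; the model ID and file name below are illustrative, and optimum-intel is assumed to be installed:

```python
# Hedged sketch: "RAG" in its simplest form, stuffing a local document into the
# prompt so a small on-device model can answer questions it wasn't trained on.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"   # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # to OpenVINO

context = open("aipc_overview.txt").read()         # hypothetical local document
prompt = f"Answer using this context.\n\n{context}\n\nQuestion: What is an AI PC?"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```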

Speaker 2:

Think about being in a restaurant: you want to build something around your burger menu, and we can do that now. We can do it extremely well with RAG and also some agentic chatbots. Now, to the very final piece. When we think about compute at Intel, we've been supporting a lot of different models, and again, I've talked about OpenVINO without properly introducing it. What we've noticed is that we accelerate some of this work so well that only a year or two ago these things were not even possible locally. What you're seeing on this computer right now, which I captured on my own machine, is one of the image generation models running on the local GPU, the iGPU. I can generate this beautiful teddy bear in about eight seconds. Eight seconds for a beautiful teddy bear: that's faster than any human could draw it, and the quality is as good as some of the photography I've taken myself. That's very intriguing. And it's the same source code if you upgrade your GPU. I don't know if you've followed the Intel Arc; I'm a fanboy. That thing is a beast, with 16 gigs of RAM. When it was released a couple of years ago, we started building AI examples on it, and we realized how powerful it is for the price point. So that's a great example of how we see programming a chatbot or creating very interactive material on the edge. Where the GPU on your laptop was once insufficient, today it will certainly be sufficient, and in this case we can finish this in two seconds, as you can see. Very fast.
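
For the curious, the image-generation demo maps onto a few lines with the OpenVINO GenAI API; the model directory below is an assumption (a Stable Diffusion model already exported to OpenVINO format):

```python
# Sketch: text-to-image on the integrated GPU with OpenVINO GenAI.
# "sd-ov" is a placeholder for a Stable Diffusion model exported to OpenVINO IR.
import openvino_genai
from PIL import Image

pipe = openvino_genai.Text2ImagePipeline("sd-ov", "GPU")  # "GPU" = the iGPU here
result = pipe.generate("a photorealistic teddy bear",
                       width=512, height=512, num_inference_steps=20)
Image.fromarray(result.data[0]).save("teddy.png")         # result is an ov.Tensor
```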

Speaker 2:

This brings us to a very important question: what is happening at Intel, and what has come of some of the work we have done? This is very new, by the way; I just released it last week, and we talked about it at Microsoft Build. We are now collaborating with many great customers and partners, and one thing we collaborate on is a set of software components you may not have heard about until now: Microsoft released something called AI Foundry, and there is Windows ML. What we have done is put some of our technology into the Microsoft ecosystem, for example OpenVINO quantization, which you can see on the left, and also our runtime.

Speaker 2:

So later on, when you buy your PC from, let's say, Best Buy, or maybe Costco, it will come preloaded with OpenVINO, and when you download your model, you will get the best performance out of it. I think this is life-changing. When I started at Intel four or five years ago, we joked about it: why don't we have this pre-installed on Windows? It's not a joke anymore.

Speaker 2:

So it's happening as we move toward, let's say, a new way of thinking about building software on the PC, especially on the AI front. The model will be pre-compressed for you, and there will be a runtime that's always ready, so you get the best performance out of your silicon. This diagram is something I spent two to three weeks getting from our engineers, so if you've been wondering how it works under the hood, I've brought it to you. My background is software engineering, so a diagram like this makes me happy. Feel free to take a screenshot. This is what's under the hood; come to our webinar and we'll talk about it further. For now, I think this is a great example of how deeply we've integrated into Windows and other platforms. Of course, our work runs on Linux as well, if you're interested in portability.

Speaker 2:

So, I've talked about a lot of benefits, I've talked about why we're here, why we're at the edge. But what is OpenVINO? Why are we here talking about it, and why do I promote it? Think about all the great examples you saw: they can come from different models. We can have PyTorch; we still have some TensorFlow models coming in, for example some of the old-school gesture recognition was TensorFlow; we have ONNX; and of course we even have PaddlePaddle from China.

Speaker 2:

All these models come onto our Intel silicon, and oftentimes they're not optimized. That's what OpenVINO has been doing: we really compress these models, making sure they fit into that small size, whether on your small laptop or in the cloud. We have a portfolio of hardware at Intel. We don't just build GPUs or CPUs; we build a lot of things from the edge to the cloud to the client. With that coverage, what OpenVINO does is bring this optimization code, which under the hood builds on oneAPI and the open-source standards work we've done, and distribute it across different hardware. In this case we even have the NPU, a new piece of silicon we provide for high-efficiency, low-power edge devices. We started down that path a long time ago with what we called the VPU, the Movidius. At one point, if you were following us on Raspberry Pi, people plugged in those USB sticks that had our accelerator built in and got really good acceleration; I think it was about 10 times faster than the Raspberry Pi alone at only five watts of power. That was one of the accelerators we provided. Why this is important to us: when you have this cross-platform, cross-OS, cross-architecture, and cross-model support, you have a very powerful tool with which you can build many things, anything you can imagine.
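
The core flow being described here is essentially two calls: convert whatever framework the model came from into OpenVINO's representation, then compile it for whichever device is present. A minimal sketch, with a torchvision model as a stand-in:

```python
# Sketch: one model in, any Intel device out.
import torch
import torchvision.models as models
import openvino as ov

pt_model = models.resnet50(weights="DEFAULT")          # stand-in PyTorch model
ov_model = ov.convert_model(pt_model,
                            example_input=torch.randn(1, 3, 224, 224))

core = ov.Core()
print(core.available_devices)                          # e.g. ['CPU', 'GPU', 'NPU']
compiled = core.compile_model(ov_model, "AUTO")        # or "CPU", "GPU", "NPU"
```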

Speaker 2:

And remember, my thesis is about having communication with the computer, so use cases that were only dreamed about in the old days can become a reality. Take computer vision: it has come a long way, from old-school YOLO, which is almost 10 years old and which we optimized really well, to the natural language models we talked about, which in the old days we called BERT. Now we can go above and beyond, with simulation of robots, the chatbots we talked about, even voice generation and recognition done very well. And, as everyone has talked about, we can even do code generation, so you can code with it.

Speaker 2:

A lot of these possibilities are getting explored and deployed on the edge, as we see every day. Today I've lost count of how many models there are. In the old days we said we had 1,000 of them; today we don't even know. We have this huge pool of open-source, cross-platform models that you've never seen. And a lot of the time, when you have all these examples, there's an open question for everybody here: where do I put the compute? Honestly, in the many years I've worked on this, including before Intel when I was doing my own startup, putting the compute in the right place has never been easy. It's a hard question, and there's never an easy answer.

Speaker 2:

Today we've figured out some of it, because when we build new silicon at Intel, which is what I like about my job, I see the new silicon come out and suddenly there are things we couldn't believe were possible before. For example, depth estimation or pose estimation: in the old days, when you tried to run it on the CPU, it was very slow, not interactive. Today, when you bundle it with the NPU, it can be low-power and real-time, simply running in the background for you while your CPU and GPU are resting, happy. So there are possibilities we didn't even know were such great ideas. Take background blur: running it on the NPU is amazing, because you can have four-hour meetings on one charge. It uses a lot less power, and your battery will be very happy.

Speaker 2:

And think about this compute: where do I put it? This is where I think OpenVINO gives you flexibility. You can switch between this model and that model, and that's the beauty of what we're doing. So I'll take a pause and read some questions in the Q&A, then I'll talk about something really hard. Let me switch over to the browser and take a look. I think we also have Ashutosh behind the scenes.

Speaker 2:

Actually, Ashutosh has been answering questions. He's from OpenVINO, well, from the Open Edge Platform, so we work together. And thank you for sharing some of the links. I guess we'll handle the rest in the Q&A; I'll keep going first.

Speaker 2:

But people always ask, how do we use the NPU? How do we do that? A lot of the time, you don't have to learn it. That's the beautiful thing about what we're doing: we try to simplify the developer workflow, so we handle the compression and the deployment for you, and we put it in an open-source library. If you really care about performance (and if you look at the clock, I only have, let's say, two seconds to run this), you can actually go into the open-source code and figure out where the support and acceleration come from. The library we use here is called NNCF, the Neural Network Compression Framework.

Speaker 2:

It does amazing work on the model, for example quantization: from 32 bits down to INT8 and INT4 for, let's say, a large language model, and we even go lower with different algorithms behind it. Why is that important? There are a lot of techniques here, and I actually have a lot of tutorials about them; we'll get to those later in the slides, so you can learn more about what I'm describing here. The key is that this optimization basically means you only use half of your hardware most of the time, or less, because under the hood INT8 is not just quantization: the silicon has computation units specialized for INT8 instructions. On Xeon, for example, on the cloud side, INT8 is what activates AMX, and VNNI before that, and you can use other data types as well, but for great performance.
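
Here is a sketch of the full INT8 post-training quantization being described, for a vision model like YOLO; the IR path is hypothetical, and random tensors stand in for the real calibration data you would supply:

```python
# Sketch: post-training INT8 quantization of weights and activations with NNCF.
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("yolo-fp32.xml")   # hypothetical FP32 OpenVINO IR

# NNCF typically wants a few hundred representative inputs for calibration.
samples = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(300)]
calibration = nncf.Dataset(samples)

quantized = nncf.quantize(model, calibration)
ov.save_model(quantized, "yolo-int8.xml")
```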

Speaker 2:

I think INT8 was the great starting point, because a lot of the time, when I talk to real developers, they don't know that you're not losing that much accuracy. Sometimes you may even do better on some computer vision tasks. It's weird, it just happens: the quantization sometimes acts as a denoiser for the model; something happens to it. So you have to measure it yourself for what you're doing. But it matters for use cases where you may only get, say, 20 frames per second, and a 2 to 3x boost takes you to real time, and then you have a very different product as an outcome. In an edge use case, you cannot put in three times more CPU or GPU. Sometimes you have to think about how to get even two times more: it's a lot more expensive in terms of power and cost, so don't underestimate the software. When we do quantization and measure it, especially for YOLO-type models, I think you just have to do it, that's my word on it, depending on the use case of course, but it's definitely worth your time. We've profiled the source code here, for example, so you can learn from it; we've done the work for you. This also matters for large language models: we talked about the chatbot experience, right?

Speaker 2:

One problem that intrigued me a lot when we started this almost two years ago, when we first looked into large language models, was that the first problem we dealt with was not computation. At first it didn't even run: it just blew up the memory, especially on the GPUs of that time, and then it stopped running at all. So we started really digging into the runtime to make sure we weren't making extra copies. Look at Llama 7B: the 32-bit version used 25 gigs of RAM. I don't have any graphics card that comes close to that range unless I go for very expensive ones, and even those don't work that well at 32 bits. So there's a big gap between what the world gives you on the model side and what is deployable.
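
The back-of-envelope math behind that 25 GB figure (Llama 2 7B actually has about 6.7 billion parameters) is worth seeing once:

```python
# Rough weight-memory math for a "7B" model at different precisions.
params = 6.7e9
for name, bytes_per_weight in [("FP32", 4), ("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    print(f"{name}: {params * bytes_per_weight / 2**30:.1f} GiB")
# FP32: 25.0 GiB   FP16: 12.5 GiB   INT8: 6.2 GiB   INT4: 3.1 GiB
```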

Speaker 2:

So two years ago it was an interesting time, when we had to do a keynote. Behind the scenes, I was one of the lucky ones who got to hack on it, working with the driver team, the runtime team, and the application team to make one demo that ran Llama. And it worked. The first time I saw the laptop pumping out, I don't know, one token per second or something like that, I carried my laptop to my wife and said, oh my God, the computer is talking to me. It's like a human. It's a bit slow, but it's still answering my questions, able to put a sentence together. That's where I think it's not just hype; it's real. It's solving some of the small problems, but if you can solve those well, and then we collaborate, maybe we can solve the bigger ones. I think that's usually how all these chatbots come together.

Speaker 2:

And that's exactly why we have INT4 and other algorithms. This one is actually doing asymmetric quantization with a group size of 128 and different ratios. We want to measure the impact on the model when we compress it, and that's why it's important.
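
In NNCF terms, that scheme looks roughly like this (the model path is illustrative; the ratio parameter controls how many layers get INT4 versus INT8):

```python
# Sketch: 4-bit asymmetric weight compression, group size 128, mixed ratio.
import nncf
import openvino as ov

model = ov.Core().read_model("llama-7b-fp32.xml")   # hypothetical FP32 IR
compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_ASYM,  # asymmetric 4-bit quantization
    group_size=128,                           # quantize weights in groups of 128
    ratio=0.8,                                # 80% of layers INT4, the rest INT8
)
ov.save_model(compressed, "llama-7b-int4.xml")
```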

Speaker 2:

At the end of the day, ram is ram. You don't just double it overnight, but if you do write software, do the right algorithm, you can reduce it significantly and without huge performance impact into the model accuracy. I think that's a win-win for what we've seen and that's why today, when you go to AIPC, a lot of model runs extremely well. It depends on the engineering work. If you, however, don't know if your model is working, we did put together a lot of different benchmarks. That's what I love about Intel. Sometimes we are very interesting about, like, how we approach things. We just did the work and we didn't talk about a lot. So in here we have one of the largest database of model. Actually, if you go back into the, the IPC thread or some of the model, we have optimized thousands of them for sure. But if you search for just AIPC or some of the model, we have optimized thousands of them for sure. But if you search for just AIPC, you can see how well some of these models can run. Before you fine-tune your stuff or anything, you can predict the outcome, how well this model will run on your machines. I think that's very powerful, especially today.

Speaker 2:

We also work with Microsoft, as I said, and many different partners. When we optimize some of these models, we actually benchmark them for you. And when I say benchmark, I don't just mean "hey, it runs this fast"; we benchmark across different parameters. Think of it almost like a hyperparameter search: you explore all the options in the pool and look at the impact on the final outcome. Go back to the NNCF library and look at our documentation, and you'll often see that for the models we pre-compress, we've tried to find the sweet spot for you. Where do you find more of this? Go to Hugging Face and look for OpenVINO: we have a shared repository with pre-compressed models, so when you start using Intel hardware, you know you're getting the best software. Take a screenshot of this. It will save you a lot of headaches (well, at least it saved mine), and it will serve you well when you're prototyping or doing any sort of work, related to Intel or not; maybe even Arm. And I don't just talk, I create a lot of examples. If you're interested in trying what I just described, I put together scripts.
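
Loading one of those pre-compressed checkpoints is a one-liner with optimum-intel; the repo name below is one example from the OpenVINO organization on Hugging Face, so browse the hub for the current list:

```python
# Sketch: pull a pre-quantized OpenVINO model instead of compressing it yourself.
from optimum.intel import OVModelForCausalLM

# Example repo from the OpenVINO org on Hugging Face; check the hub for others.
model = OVModelForCausalLM.from_pretrained("OpenVINO/Phi-3-mini-4k-instruct-int4-ov")
```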

Speaker 2:

You can double-click and it will run. It will install, it will download, and it will run: those three steps in a bat file. This is the link to it. Enjoy the code. I wrote it a week or two ago; will it still work one year later? I don't know, maybe things will change, but last week I tested it and it works really well on laptops. Make sure you have 32 gigs of RAM. 32, okay? And when it runs, it doesn't just run on the CPU; this one runs on the NPU as well. So when people ask whether this runs on the NPU: yes, it does. When do I use the NPU? It depends on the scenario. In this case I asked the machine to write a song about AI with emoji. I don't know why, but it worked. It's kind of fun; you can sing with it. Your imagination is the limit: you can be very creative, build your own DJ that sings a song for you. I think it's fun for the kids these days because it opens up their imagination.
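
What a script like that does under the hood is roughly this, with OpenVINO GenAI picking the device from a single string (the model directory is a placeholder):

```python
# Sketch: the same pipeline runs on CPU, GPU, or NPU by changing one string.
import openvino_genai

pipe = openvino_genai.LLMPipeline("phi-3-mini-int4-ov", "NPU")  # or "CPU" / "GPU"
print(pipe.generate("Write a song about AI with emoji", max_new_tokens=200))
```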

Speaker 2:

And then for my one-year-old, I'll get to teach them how to use GenAI, to do something fun and have an interesting learning experience, not fully relying on it, just exploring. That's where curiosity comes from, right? And if you're interested in what we just saw, we as a team also built a RAG demo where you can see how we bundle this with our partners, and really the whole community: anyone who's using these tools, from LangChain to LlamaIndex, we integrate with them. So you see OpenVINO's presence in those large-model tools, and that's important because you don't have to relearn anything.

Speaker 2:

You can use what you have; you just have to make sure you enable the right components, and that's going to give you the acceleration. And as homework (I'm closing in two minutes), we also have an example on the edge side where you can run an agentic workflow. This is real: you can talk to a chatbot locally and order some paint. I don't know why you'd want to paint your house today, but it's a great experience, and you can see what a chatbot can do locally.

Speaker 2:

This one is a bit slower because there's a lot of extra deep-thinking process, which means we use a very big chatbot model, but as silicon moves forward this will improve. It's pretty fast already, and it will get even faster on the edge. Make sure you take a screenshot of this one. In the last minute I'll do some closing. First, learn from the notebooks: that's where you'll find a hundred examples. And we have the Open Edge Platform, which you can use to learn how to deploy this on the edge; if you Google "Open Edge Platform" from Intel, you will find it on GitHub. As a closing benefit, I have six benefits here, and two I think you have to remember. Memory: if your memory footprint isn't sufficiently well managed, it will not run well. That's what we learned over years of optimization; we got a lot of speedup by learning how to manage our memory. A lot of things are memory-bound.

Speaker 2:

And learn how to deploy on different platforms. You never know what people need: sometimes the CPU is busy, sometimes the GPU could be busy, so maybe use the NPU for offloading. We saw gaming as one example: people want to use the NPU for analytics so they don't touch the CPU or GPU, and that keeps things scalable.

Speaker 2:

Using the Open Edge Platform, as I said, is very important. This is the key: please download it, try it, and give us feedback. We open-sourced Geti, an amazing tool for training and deploying image models. We open-sourced many great tools, all the things that would take you thousands, maybe hundreds of thousands, of hours to build, and we put them together into a platform for you. If you try it and you don't like it, that's okay; we'll be here listening, and we'll make it better. That's what happened with OpenVINO: in the beginning it was a bit rough, but now it's very smooth. That's all I can say. If you're really interested in what we're doing, in this last minute, connect with us. I think I've spent my 25 minutes, if I'm reading my clock correctly. Am I right?

Speaker 1:

Yeah, thanks, great talk, Raymond, and thanks also to Ashutosh, who followed up on the different questions. Really a great talk. Maybe I have a couple of questions for you, Raymond. You provided a lot of assets and a comprehensive description of OpenVINO, but there are different types of industries, small, medium, and large, which have spent time and effort familiarizing themselves with what you call conventional AI. So what's your take on the message for them? Why and how should they find the time to move forward on generative AI, with OpenVINO and the great methods you described?

Speaker 2:

So I think that's a very good question. It feels like we dropped the ball on the deep learning work we did four or five years ago: all of a sudden we have this GenAI, everyone just talks about it, and what happened to the people who already spent a lot of money and time on the deep learning side? First, if you're already doing work that is functional, like license plate detection, we've deployed so many of those, and there are a lot of smart city ideas we use for retail; they work extremely well. I think we just need to make them better. And when I say better, I mean that oftentimes I see GenAI being the thing that opens up the flexibility of some of these approaches. A famous example, I would say, for a lot of these:

Speaker 2:

Again, I'll go back to YOLO, because a lot of people know it. You need to fine-tune your model before it works; you need to do a lot of extra work to ensure the model is tuned to the application. These models are not as flexible: flexibility is not built into the architecture itself. So for those use cases, I think people should start realizing that this new wave of approaches can create flexibility. That's step one when I think about generative AI.

Speaker 2:

A lot of things are a lot more tunable. By tunable I mean, with YOLO for example, if you do it right, you can automate the fine-tuning step with an agent approach. And on top of that, I saw one called YOLOE. It takes YOLO, which can recognize, say, a duck and a cat, and asks: what if it has never seen a banana before? Now what? So they do prompting, not prompt injection, a prompt-based approach: they basically combine it with a model called CLIP, so it can now recognize something that looks like a banana, and then you can say "banana on a table." It almost looks like generative AI, as if you're generating the model structure on the fly, but using the YOLO architecture. So it's very effective, very efficient.
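
A sketch of that open-vocabulary flow using the Ultralytics YOLOE interface; the weight file name and exact calls follow the Ultralytics docs as I understand them, so treat them as assumptions to verify:

```python
# Hedged sketch: open-vocabulary detection, prompting new classes via CLIP-style
# text embeddings instead of retraining. API details per Ultralytics YOLOE docs.
from ultralytics import YOLOE

model = YOLOE("yoloe-11s-seg.pt")                    # assumed checkpoint name
names = ["banana"]                                   # a class YOLO never trained on
model.set_classes(names, model.get_text_pe(names))   # text prompt -> class embedding
results = model.predict("table.jpg")                 # now detects "banana on a table"
results[0].show()
```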

Speaker 2:

So I think the industry should really think about how to create better efficiency in some of the older things we've done. I don't think they're obsolete; they still serve a purpose. But how can we reduce the time spent on them? And then, as my title said, chatbots: how can we make things communicative, more interactive? The whole point, and I don't care whether it's agentic or not, is: can I create a good experience for the user? That's the number one rule. Before, with the older approaches, we were very limited; now we can create experiences. This, I think, is the key turning point. I never felt so alive as when I saw the machine talking to me. My kid will be like, yes, Dad, the machine has always talked to me.

Speaker 2:

But I spent 40 years of my life never seeing a machine talk to me, so I think people forget. It becomes the new norm, but back then the machine didn't talk; it just, you know, jiggled things around, and I didn't know what it was trying to tell me. Did anyone use Microsoft Word back when it had that little help assistant icon? Did it ever help me? Not once; it never answered my question when I started computing. I'm not saying Microsoft was bad, but at the time I thought AI was broken, because I'd ask, how do I do this, and it would give me a wrong answer every time. Now we're getting closer to the right answer, closer and closer every day. Anyway, that's my take.

Speaker 1:

That's fantastic. Thanks, Raymond, for elaborating so much. Maybe another question, more related to OpenVINO. Imagine I am a young professional and I would like to leave my footprint by designing a generative AI model. How does OpenVINO ease this task?

Speaker 2:

To start, I think the point of OpenVINO is really, again, to make sure you get the best performance out. But over the years we've broadened a little. We now have the most examples you'll ever need: if you look at the notebooks, we have almost 200 of them, and in our reference design kits we have another 20 different examples. I think we've built one of the most comprehensive libraries. Almost anything you want to try, you'll most likely find a copy of it in OpenVINO. It gives you that library of material where you can dream. When I was doing my PhD, I spent two weeks just figuring out how to run the CUDA code, just to get it running, never mind getting it performing.

Speaker 2:

It was just getting it running; then I spent two months getting it performing, and then a year making it actually publishable. So there's a pipeline from idea to executable to actually deployable, and it took a year back when I started, and of course CUDA was step zero. Similarly here, but with OpenVINO I think in a couple of weeks you'll be able to pull off some amazing demos. I'll tell you an industry secret: every demo I build takes only two weeks, because I have to keep up with the industry. Every single thing is two weeks; in two weeks I can spin something out. How? Do I write everything from scratch, optimize everything from scratch? No, I have to use the tools. That's the secret.

Speaker 2:

If you are starting new today, start from fresh ground where you can be competitive with others. I think that's a great starting point, and that's why we're doing this. Go to the OpenVINO Notebooks and become a contributor. We started with five contributors, including myself; now we have 120, something like that. If you want to become a contributor, do Google Summer of Code with us; we've had student programs, so you can learn with the experts. There are a lot of ways to engage with us, and on my LinkedIn you'll see a lot of different posts I've made about how to engage with Intel.

Speaker 1:

Yeah, that's great. Also the fact that you mentioned the student programs, because we need educational contributions for people who are just approaching the field.

Speaker 2:

Yeah, we've done it for four years straight, and next year will be the fifth year.