This graph shows how many times the word ______ has been mentioned throughout the history of the program.
Girls, sorry, sorry about the thumbnail, I know it's confusing, you don't know which of the two bald guys you're going to watch live.
It's me, DOTCSV, back in the house, continuing this cycle of events we haven't stopped having over these last few days.
We had the OpenAI event last week, which already feels like it was a month, two months ago.
OpenAI has already had time to go through several controversies; we'll comment a bit on that today.
Today we had the Google event, a long event, loaded with interesting news, but which perhaps didn't... "satisfajó"? "satisfació"?... we didn't sleep enough.
It's true the event had some cool little things, but it dragged; two hours of event felt long to us.
And now we have Microsoft's week. "Satisfizo", thanks.
Now we have Microsoft's week.
It's been a bit odd, because I'd had it written down for a while that today was the main keynote, but then I was surprised yesterday to see there was an important Microsoft event where several things were presented.
We're going to watch a very quick summary of what was presented yesterday and comment on it.
So I thought the event was yesterday, but no.
There's an event today too, and really there will be events all this week, and today is the main keynote of Microsoft Build, which is Microsoft's big annual event.
So, let's first look at what was presented yesterday, which was more oriented toward the hardware side.
There was interesting news; we saw some appearance of OpenAI technology, because remember the somewhat strange marriage, which we'll also be able to discuss in this stream, going on between Microsoft and OpenAI.
We know Microsoft holds a 49% stake in OpenAI, and today we may see other news, probably more oriented toward Copilot services integrated with other tools.
Maybe we'll see something more developer-oriented.
It's going to be very interesting.
Let's see what happens.
You're telling me the audio is a little low.
Tell me how everything looks, and with that we'll get started.
Let's hit play here.
We're going to listen to Satya Nadella.
This is yesterday's presentation, a summary of yesterday.
And we're going to have Satya Nadella; we're not going to have Scarlett Johansson.
We'll also comment a bit on that later.
When I start the video, please also tell me in the chat whether it sounds good and whether both my voice and the video audio are well balanced.
This is yesterday's presentation.
[Garbled transcription: Satya Nadella's opening remarks on Microsoft's AI vision, leading into a new category of experiences]
across the edge and cloud.
We call this new category
Copilot Plus PCs.
Very good, okay.
So yesterday they presented, well, this,
the first thing we've seen: the Copilot
Plus PC. Microsoft,
please, I know you're listening, Microsoft:
stop calling
everything Copilot. It's
a mess. In fact, I already
said it on Twitter a couple of weeks ago,
there should be a rebranding so that
Copilot isn't only the programming
tool, or the Office assistant,
or Windows Copilot,
and now you go and hand me
a new
Copilot PC.
A computer. Let's watch.
We've reimagined the entirety of the PC
from the inside out.
We've made a huge leap forward in
technical capability to usher in this
new category. And even with all
this power, these new PCs
are unbelievably thin, light
and beautiful. I'm going to show you
these in a minute, but let's start
first by describing what makes them
unique. First,
they are the most powerful Windows
PCs ever built.
Second, they give you access
to the most advanced AI
available with Microsoft Copilot.
And finally, they're going to introduce
all new productivity, creativity
and communication capabilities that I
think of literally as superpowers.
Let's start first by talking about
performance. Copilot Plus
PCs introduce a new systems
architecture powered by the
CPU, GPU and an all new
NPU running at an incredible
40 trillion operations per second.
40 trillion.
Okay, we have a new
PC with a new chip that's going to be
oriented, well, here they
call it an NPU, Neural Processing
Unit I imagine, and it's going to be
oriented toward the whole area of
artificial-intelligence
processing.
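As a back-of-the-envelope illustration of what a figure like "40 trillion operations per second" means (my own arithmetic here, not Microsoft's), the cost of a dense matrix multiply gives a feel for it:

```python
# Back-of-the-envelope: what does 40 trillion ops/second (40 TOPS) buy you?
# A dense matmul of two N x N matrices costs roughly 2 * N^3 operations.
NPU_TOPS = 40e12  # claimed peak ops/second (a theoretical ceiling, never hit in practice)

def matmul_time_seconds(n: int, tops: float = NPU_TOPS) -> float:
    """Idealized lower bound on the time to multiply two n x n matrices."""
    ops = 2 * n ** 3
    return ops / tops

for n in (1024, 4096, 16384):
    print(f"{n:>6} x {n:<6} matmul: {matmul_time_seconds(n) * 1e3:.3f} ms (ideal)")
```

Real throughput is far lower than peak, but the scaling (8x the time for each doubling of N) holds.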
We also completely re-architected
Windows 11 on a new minimum
hardware configuration to optimize
performance and battery. The
result, Copilot Plus PCs are the
fastest Windows PCs ever built. And if we
wanted to do a comparison, they're 58%
faster than the most advanced MacBook
Air with the M3 processor.
Notice here that they're comparing against
the MacBook Air, not the Pro. It's
true they're comparing against the
M3 chip, but I don't know, well, which
version of the M3, because the M3 has the
Pro, the Max, the mega-whatever, the
whatever-else, the Ultra; I mean, there are
several sizes.
Okay, important, I saw this
earlier on Twitter too. They've
redesigned the Copilot application so
that there's something like a desktop
app, and it's very similar. Well,
it's happening with all the AI chatbot
interfaces, but it's very
similar to the ChatGPT format, right?
Sidebar with all the
conversations, menu at the bottom with the
different plugins, which are the GPTs.
Quite, quite similar. But
hey, if the format works, then
why change it?
We're making it more personal. We're
making it faster to get to what you
want. We're also going to make it a
flexible app experience, so you can
resize it, you can snap it, or you can
even maximize it and run it in full
screen. And one of the great things
now when you have Copilot on Windows
is I can now use two windows and I can
literally drag and drop something from
one window to the other, like in this
case with the Northern Lights, and I'm
able to ask it, hey, tell me about this
phenomena that happened in Seattle last
weekend. And I get a great set of
information on it. Well, nothing
spectacular.
Later this year, you're going to see
Copilot show up in Windows in
settings, where it's going to provide
helpful suggestions for you. So, for
example, here, when you go into
Windows settings and you need to
configure PC with one click, Copilot
will help you. Or you can get things
done in fewer steps when you go to
file. I liked that. That's a
bit of what we already saw at last year's
Microsoft Build, where you could
ask in the chat, in the sidebar,
which seems to have
disappeared here, right? In the demo they
showed us of Windows Copilot, where
you'd say, switch my theme to dark, or
open Spotify, or things like that, and it more or
less executed, right? Now it seems
they've integrated it as suggestion
buttons in the menus. That's nice.
This exists on Mac.
Or even, and this is incredible, one of my
favorites: inside notifications
as well. So when you're on the
Windows desktop and you get a notification,
you can now simply click on that notification
and Copilot, for example in this case,
will answer that question so you can reply to an
email with the right PowerPoint,
with just one click. And we are very
excited to bring the new technology from
our partners at OpenAI
to Copilot.
Okay.
I'm happy to share that GPT-4o,
with its incredible text, voice and
vision capabilities, will soon be available in Copilot.
Let me show you a preview.
Quite interesting: the appearance of GPT-4o
inside the operating system. This is interesting
because we already saw last week that OpenAI is
aiming at desktop applications.
They presented the Mac app, probably to
let Microsoft present their own
app at this event. And the GPT-4o integration
as of today maybe isn't that striking, but the
closest evolution we'll see in the coming
months, surely, is that it won't just
run locally and that's it, but will also be able
to control the operating system. That's roughly what
makes sense. As of today, well, the demo they're about to
show us is simply the application
perceiving what's happening on screen
while you hold a dialogue with the assistant itself.
But in the future, the next thing we'll see will be
execution, right? The agency part, autonomous
agents that can execute instructions
we give them.
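To make that "perceive now, act later" idea concrete, here is a purely hypothetical sketch of the loop such an assistant-turned-agent would run; none of these names or types correspond to any real Microsoft or OpenAI API:

```python
# Hypothetical observe -> decide -> act loop for a screen-aware assistant.
# Everything here (function names, the Action type) is illustrative only.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "speak" (today's demos) or "execute" (the speculated next step)
    payload: str

def decide(screen_text: str, user_request: str) -> Action:
    # Stand-in for a multimodal model call: today it only talks about the screen.
    if "zombie" in screen_text.lower():
        return Action("speak", "Run! Build a tower of blocks to get out of reach.")
    return Action("speak", f"I can see your screen; you asked: {user_request}")

def step(screen_text: str, user_request: str, allow_execution: bool = False) -> str:
    action = decide(screen_text, user_request)
    if action.kind == "execute" and not allow_execution:
        return "[execution blocked: the assistant cannot act yet]"
    return action.payload

print(step("A zombie approaches the crafting table", "help me craft a sword"))
```

The interesting future step is flipping `allow_execution` on, which is exactly where the agent debate starts.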
One of our developers, Ryan, simply
shared his screen with Copilot while playing
Minecraft. Now, what you're going to see here are two
voices. The first will be Ryan, the second will be Copilot.
And he'll be having a natural conversation, just as
he would with a friend while playing.
Let's watch this experience.
Hey Copilot, how's it going?
Hey, I'm doing great and it looks like you're
about to dive into some Minecraft. Ready to build,
explore, and maybe dog a few mobs?
Yeah, I'm trying to figure out how to make a sword.
My son plays this game all the time, and I have no idea
what I'm doing.
No worries.
I'm here to help you impress your son.
I see you've got a crafting table in front of you,
which is a great start.
To craft a sword, you'll need some materials.
Can you open your inventory?
Just press E on your keyboard.
Yeah, do I have what I need?
Let's see. You've got some sticks.
Perfect for the sword's handle.
But it looks like you're missing the material for the blade.
You can use wood, stone, iron, gold, or diamond.
All right, let's pause it there for a second.
Did you see how Copilot not only could understand
that he was in Minecraft, but if...
Think about this with all the applications you use.
I mean, today, for example, I was making animations
in After Effects, okay?
I was working here on the next video.
Suddenly I have questions about the program and I go off to ChatGPT to search,
because finding things on Google already takes me forever.
It sends me to forums from 2014 and that's ancient.
Imagine being able to have a conversation, an assistant that at every moment
is watching what you're doing and can respond in real time.
Incredible.
It could also understand Ryan's request for help and assess the inventory on his screen.
Well, it gets better.
As Ryan continues, he gets a surprise that Copilot amazingly identified
and it tailored its interaction.
Let's watch.
Let's start by gathering some wood or stone.
Shall we head out and find some resources?
Yeah, that sounds great.
Oh my god, what's that?
Oh no, it's a zombie.
Run.
You need to get away from it fast.
Either sprint away or quickly build a tower of blocks straight up to get out of its reach.
If you can find a shelter or dig into the side of a hill, do it now.
Just make sure it can't reach you.
All right.
I think I might be safe now.
That was a close one.
Great job finding shelter.
So that was honestly, for us...
How did they know I play Minecraft, eh?
They know I like it, eh?
These people...
My god.
It was honestly a magic moment to discover how Copilot with GPT-4o could immediately identify
and adapt and be helpful on a screen that it had no knowledge of until it saw it.
In this case, a surprise zombie spawn.
And I love this example because it demonstrates how Copilot can become a...
Imagine, for example, a version of this assistant but one that's super sarcastic.
Like, the zombie shows up and kills you and it goes, god, you're such a loser.
Eh, GG.
Eh, wow, incredible, eh?
Great game.
Champ.
Natural and helpful part of your entire Windows experience.
Every Copilot plus PC will have the building blocks needed to deliver outstanding performance,
including 16 gigabytes of RAM, a minimum of 256 gigabytes of SSD storage, an integrated
neural processing unit, or an NPU.
NPUs specialize in the matrix math needed for AI.
And the NPUs on Copilot plus PCs are the most powerful in the PC category.
Capable of over 40 trillion operations per second, this is a step change in AI performance,
allowing AI that is lightning fast and always working, even with multiple AI workloads running concurrently.
Not only that, these NPUs barely sip a few watts of power, even when fully utilized.
We have brought that same focus on efficiency to every part of the Copilot plus PCs.
We were relentless in finding ways to reduce power consumption system wide.
The result is unprecedented battery life with all of your AI.
Today we're debuting Copilot plus PC.
That's really cool.
This is a bit like what happens with Apple's chips too.
And for me it was the deciding factor in buying a MacBook Pro:
the new chips, based on the new architecture, that have extremely low power consumption.
Until now Apple was the only one offering that in pro-level laptops.
And now it seems Microsoft is joining in, and honestly, welcome news.
Copilot plus PCs on the ARM architecture with the Snapdragon X series.
And we were excited to partner with Qualcomm to design this entirely new chip together.
And we rebuilt Windows 11 from the inside out for ARM.
As a result, Copilot plus PCs on Snapdragon X will deliver up to 15 hours of web browsing
and up to 22 hours of local video playback that is 20% more than the latest MacBook Air 15-inch.
In fact, that's right.
Copilot plus PCs outperformed the MacBook Air 15-inch by up to 23% on peak performance.
And more importantly, up to 58% on sustained multi-threaded throughput performance.
Alongside our partners, we are incredibly excited to enter this new era of AI
and show you what Copilot plus PCs can do.
Introducing the first ever Copilot plus PCs from Surface.
The all new Surface laptop and Surface Pro.
They have been designed and engineered to be the ultimate Copilot plus PCs.
And these are the most powerful laptop and pro we have ever built.
They come with the longest battery life.
If you're wondering, this is yesterday's presentation, a summary.
Today's presentation will start in 5 minutes.
And then we'll begin.
This is just to catch up a bit on what happened yesterday.
Although I think the most interesting thing was the Microsoft Copilot
integration with GPT-4o.
And I want to see if, before we start, we can see...
I don't know if it's around here or if they're going to show it to us today.
If not, I'll just describe it out loud and we'll move on to today's presentation.
Which is the topic of the Recall application.
This is super, super curious.
Because one of the new features that seems to be coming to Windows
is a feature, and I want you to tell me in the chat what you think,
where you're going to have your computer taking screenshots over time.
Capture, capture, capture, capture, capture.
And storing them in a timeline that will let you go back and relive moments of...
Ah, five days ago I, well, who knows.
I don't know.
I couldn't even say what I'd want to search for, right?
Something I did in the past, maybe a lost document.
Maybe something that, I don't know, has disappeared.
I'm not exactly clear on why they want to do this today, okay?
I mean, I don't really understand what exactly they want to offer the user.
Because the ability to remember what you did in past sessions, well, it can be nice.
But it's not super, super fascinating either.
It's obviously going to work with AI, with semantic search over images and text.
So there can be an image, like the example they showed, right?
Where was I when I was looking at a brown handbag?
And it takes you to the moment in time when you searched for it.
That was cool.
I mean, well.
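As a toy illustration of the concept just described (timestamped captures plus search over them), here is my own sketch; it is not how Recall is actually implemented, which reportedly uses on-device semantic search over real screenshots rather than keyword matching:

```python
# Toy "recall" index: timestamped snapshots of on-screen text, searchable later.
from datetime import datetime, timedelta

timeline: list[tuple[datetime, str]] = []  # (when, extracted screen text)

def capture(when: datetime, screen_text: str) -> None:
    """Pretend periodic capture: store what was on screen at a point in time."""
    timeline.append((when, screen_text))

def recall(query: str) -> list[datetime]:
    """Return the moments whose captured text mentions the query."""
    q = query.lower()
    return [when for when, text in timeline if q in text.lower()]

t0 = datetime(2024, 5, 20, 10, 0)
capture(t0, "Shopping results: brown handbag, leather, 49.99")
capture(t0 + timedelta(days=1), "After Effects - composition timeline")
print(recall("brown handbag"))  # the timestamped moment(s) when it was on screen
```

Swapping the substring match for an embedding similarity search is what turns this from keyword lookup into the semantic search the feature promises.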
In fact, I think it's problematic, because it has really triggered people's sensitivity around privacy.
Again, here they explain, like what happened at Google's event last week,
when we got goosebumps a bit as they talked about using conversation analysis to see if you were being scammed.
And then they said, but relax, the conversation is all processed on the device.
With the new chips, everything local, edge AI, don't worry about privacy, okay?
But even so, it's something that provokes rejection at first.
Why do I think it can be interesting, or why do I think they may be making this move?
Because in the future it will make sense, when the whole autonomous-agents thing starts to roll out and so on,
to have collected information about how the user interacts with the computer over time.
I think having that is going to be necessary.
In the same way that today, if I say,
can you imagine a device recording 24/7 every conversation you have in your day-to-day?
A lot of people would be scared, but it's something that in the near future, if not already,
makes sense, because artificial intelligence now lets you dive through all that information
and lets us pull out, hey, when did I say such-and-such?
Or give me a summary of the conversations I've had.
Or what conversation did I have at work?
What are the important points?
So, as AI finds new potential in surfing through or working with our data,
the need to capture richer, real-time telemetry of our lives also advances, right?
So, as has always happened with technology, potential gets unlocked,
but at the same time it's unlocking, well, privacy problems
that some people don't care about and others worry about enormously.
I think Windows is evolving a bit toward this, right?
The idea of Recall, I think, goes that way: having a kind of history of how things evolve,
how the user works on the computer, what they're looking at and so on,
so they can later offer an experience, well, I don't know if of agents or exactly for what,
but something along those lines, okay?
So, I don't know how that will end up, but I've seen quite a bit of debate today about that feature.
But anyway, let's move over to the livestream.
Let's go to Microsoft Build 2024, which starts in a few minutes.
Let's hit Live here.
And the keynote is about to start, for real this time.
Now we really are just about to begin.
I'm going to switch the camera here so I don't get in your way as much.
And this is about to start.
Honestly, I have no idea what they're going to present today.
I don't know how this is going to evolve.
Certainly, Microsoft has a lot to present.
And I hope it doesn't run as long as Google's did last week.
Guys, girls, this is starting now.
This is live right now.
Everything we're going to see here is new from this point on.
How beautiful.
One thing you have to say about Microsoft, or give them credit for, is that they have a design team I love.
I mean, their animations seem incredible to me.
I always enjoy them a lot.
The quality is maxed out and even so the stream looks a little rough.
But I think it's very interesting.
Thank you.
This is a new way to analyze with a personal coach.
We're going to be able to have not only productivity gains, but insights served to us in near real time.
Generative AI can learn from the data to help improve farmer productivity.
AI is unlocking creativity for us all.
Descriptions are so detailed.
In my imagination, I can create some more.
With expertise.
Good morning.
It's fantastic to be back here at Microsoft Build.
Welcome to everyone here and joining us on the web.
You know, developer conferences are always most exciting, most fun when there's these fundamental changes that you can sense in the air.
You know, I've marked all my adult life by coming to PDCs and Builds.
There are no Spanish subtitles, guys.
It can't be translated.
You know, distinctly, the first time Win32 was discussed.
I guess it was 91.
Dot net.
Azure.
Right?
These are moments that I've marked my life with.
And it just feels like we're yet again at a moment like that.
It's just that the scale, the scope is so much deeper, so much broader this time around.
Right?
Every layer of this tech stack is changing.
You know, from everything from the power draw and the cooling layer of the data center to the NPUs at the edge are being shaped by these new workloads.
Right?
These distributed, synchronous, data parallel workloads are reshaping every layer of the tech stack.
But if you think about even going all the way back to the beginning of modern computing, say, 70 years ago, there have been two real dreams we've had.
First is, can computers understand us instead of us having to understand computers?
And second, in a world where we have this ever-increasing information about people, places and things, right?
As you digitize more artifacts about people, places and things, you have more information.
I'm looking for Microsoft's page in Spanish.
Do they have subtitles?
We have to reason, plan, and act more effectively on all that information.
Those are the two dreams that we've had for the last 70 plus years.
And here we are.
I think that we have real breakthroughs on both fronts.
The core underlying force, one of the questions I always ask myself is like, okay, this is great.
This is like maybe the golden age of systems.
What's really driving it?
I always come back to the scaling laws, just like Moore's law, you know, helped drive the information revolution.
The scaling laws of DNNs, along with the model architecture and interesting ways to use data and generate data, are really driving this intelligence revolution.
You could say Moore's law was probably, you know, more stable in the sense that it was scaling at maybe 15 months, 18 months.
We now have these things that are scaling every six months or doubling every, you know, six months.
You know, what we have, though, with the effect of these scaling laws is a new natural user interface that's multimodal.
That means supports text, speech, images, video as input and output.
We have memory that retains important context, recalls both our personal knowledge and data across our apps and devices.
We have new reasoning and planning capabilities that helps us understand very complex context and complete complex tasks, right, while reducing the cognitive load on us.
But what stands out for me as I look back at this past year is how you all, as developers, have taken all of these capabilities and applied them, quite frankly, to change the world around us.
Right?
I will always remember this moment in January 2023 when I met a rural Indian farmer who was able to reason over some government farm subsidies that he had heard about on television using GPT-3.5 and his voice.
It was remarkable, right?
For me, it just brought home the power of all of this, because a frontier model developed on the West Coast of the United States just a few months earlier was used by a developer in India to directly improve the life of a rural Indian farmer.
The rate of diffusion is unlike anything I've seen in my professional career and it's just increasing.
In fact, earlier this month I was in Southeast Asia, I was in Thailand where I met a developer and I was having a great round table and he was talking to me about how he's using Phi-3 and GPT-4.
And he was using Phi-3 to just optimize all the things he was doing with RAG.
I mean, this is crazy.
I mean, this is unbelievable.
It had just launched a few weeks earlier and I was there in Thailand, in Bangkok, listening to a developer talk about this technology as a real expert on it.
So it's just great to see the democratization force, which we love to talk about, but to witness it has just been something.
And this is, quite frankly, the impact of why we are in this industry and it's what gives us, I would say, that deep meaning in our work.
So I want to start, though, with a very big thank you to every one of you who is really going about bringing about this impact to the world.
Thank you all so very much.
You know, when I think about what progress we've made even since last time we were here at Build, we've built really three platforms.
The first is Microsoft Copilot, which is your everyday companion.
It puts knowledge and expertise at your fingertips, helps you act on it, and we built the Copilot stack so that you can build your AI applications and solutions and experiences.
And just yesterday, we introduced a new category of Copilot plus PCs, the fastest AI-first PCs ever built.
All three of these things are exciting platforms, but I want to start with Copilot plus PCs.
You know, we're exposing AI as a first-class namespace for Windows.
This week, we're introducing the Windows Copilot runtime to make Windows the best platform for you to be able to build your AI applications.
You know, what Win32 was to graphical user interface, we believe the Windows Copilot runtime will be for AI.
It starts with our Windows Copilot library, a collection of these ready-to-use local APIs that help you integrate into your new experiences all of the AI capabilities that we shared yesterday.
Now, this includes no code integrations for studio effects, things like creative filters, teleprompter, voice focus, and much more.
But, of course, if you want to access these models itself, right, you can directly call them through APIs.
We have 40-plus models available out of the box, including Phi Silica, the newest member of our small language model family, which we specifically designed to run locally on the NPUs in Copilot Plus PCs.
You know, the other thing is that the Copilot library also makes it easy for you to incorporate RAG inside of your applications on device data.
It gives you the right tools to build a vector store within your app. It enables you to do that semantic search that you saw with Recall.
But now you can, in your own application, construct these prompts using local data for RAG applications.
Now, I'm so thrilled to announce as well today that we will be natively supporting PyTorch and new WebNN framework through Windows Direct ML.
Very nice. Native PyTorch.
You know, native PyTorch support means thousands of OSS models will just work out of the box on Windows, making it easy for you to get started.
In fact, with WebNN, web developers finally have a web native machine learning framework that gives them direct access to both GPUs and NPUs.
In fact, last night I was playing with it, turning it on in Edge and seeing the WebNN sample code running.
It's just so cool to see it, you know, now using even the NPUs.
Both PyTorch and WebNN are available in developer preview today. Let's take a look.
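The on-device RAG flow the keynote describes (build a vector store over local data, semantically search it, then construct a prompt) can be sketched in miniature. This toy substitutes a hashed bag-of-words "embedding" for a real on-device model; all names here are illustrative, not the Windows Copilot Library API:

```python
# Minimal on-device RAG sketch: embed local snippets, retrieve by cosine
# similarity, and build a prompt. A real app would call the platform's
# embedding model; the hashed bag-of-words below is only a stand-in.
import hashlib
import math

DIM = 512

def embed(text: str) -> list[float]:
    """Toy embedding: hash each word into one of DIM buckets, then normalize."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# "Vector store" built over local documents.
docs = [
    "Meeting notes: ship the quarterly report by Friday",
    "Recipe: tomato soup with basil and garlic",
]
store = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = embed(query)
    ranked = sorted(store, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

context = retrieve("when is the quarterly report due?")[0]
prompt = f"Answer using this local context:\n{context}\n\nQuestion: when is the report due?"
print(prompt)
```

Swap `embed` for a real local embedding model and `store` for a persistent index and you have the shape of the on-device RAG pattern described above.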
Okay.
What we were talking about before is Recall.
This is also an update to Paint, the Cocreator.
Automatic subtitles, which would come in really handy for us.
Accessibility applications.
Thank you!
Today, we are introducing more than 50-plus new products and partnerships to create new opportunity for you.
We've always been a platform company, and our goal is to build the most complete end-to-end stack from infrastructure to data to tooling to the application extensibility
so that you can apply the power of this technology to build your own applications.
And so today I want to highlight our top news for this event across every layer of this co-pilot stack.
So let's dive right in with the infrastructure.
You know, we have the most complete scalable AI infrastructure that meets your needs in this AI era.
We're building Azure as a world computer.
We have the most comprehensive global infrastructure with more than 60-plus data center regions, more than any other cloud provider.
Over the past year, we've expanded our data center regions and AI capacity from Japan to Mexico, from Spain to Wisconsin.
We're making our best-in-class AI infrastructure available everywhere.
And we're doing this with a focus on delivering our cloud services sustainably.
In fact, we're on track to meet our goal to have our data centers powered by 100% renewable energy by next year.
Yeah.
You know, we're optimizing power and efficiency across every layer of the stack from the data center to the network.
Our latest data center designs are purpose-built for these AI workloads so that we can effectively and responsibly use every megawatt of power to drive down the cost of AI and the power draw.
And we're incorporating advanced data center cooling techniques to fit the thermal profile of the workloads and match it to the environment and the location where it operates.
We're dynamically able to map workloads to the best accelerated AI hardware so that we have the best performance.
And our custom I.O. hardware and server designs allow us to provide dramatically faster networking, remote storage, and local storage throughput.
You know, this end-to-end approach is really helping us get to the unprecedented scale.
In fact, last November we announced the most powerful AI supercomputer in the cloud for training, using just actually a very small fraction of our cloud infrastructure.
And over the past six months, we've added 30 times that supercomputing power to Azure.
Yeah, it's crazy to see the scale.
And of course, we're not just scaling our training fleet, we're scaling our inference fleet around the world, quadrupling the number of countries where Azure AI services are available today.
And it's great to see that.
At the heart of our AI infrastructure are the world's most advanced AI accelerators, right?
We offer the most complete selection of AI accelerators, including from NVIDIA and AMD, as well as our own Azure Maya, all dynamically optimized for the workloads.
That means whether you're using Microsoft Copilot or building your own Copilot apps, we ensure that you get the best accelerator performance.
at the best cost.
For example, you know, you see this in what has happened with GPT-4, right?
It's 12x cheaper and 6x faster since it launched.
And that's the type of progress you can continue to see as we evolve the system.
It all starts, though, with this very deep, deep partnership with NVIDIA, which spans the entirety of the Copilot stack,
across both all of their hardware innovation as well as their system software innovation.
Together, we offer Azure confidential computing on GPUs to really help you protect sensitive data around the AI models end to end.
We're bringing, in fact, the latest H200s to Azure later this year.
And we will be among the first cloud providers to offer NVIDIA's Blackwell GPUs, B100s as well as GB200 configurations.
And we're continuing to work with them to train and optimize both large language models like GPT-4o as well as small language models like the Phi-3 family.
Now, beyond the hardware, we are bringing NVIDIA's key enterprise platform offerings to our cloud, like the Omniverse cloud and DGX cloud to Azure, with deep integration with even the broader Microsoft cloud.
For example, NVIDIA recently announced that their DGX cloud integrates natively with Microsoft Fabric, that means you can train those models using DGX cloud with the full access to fabric data.
And Omniverse APIs will be available first on Azure for developers to build their industrial AI solutions.
We're also working with NVIDIA, NIM industry-specific developer services and making them fantastic on Azure.
So, a lot of exciting work with NVIDIA.
Now, coming to AMD, I am really excited to share that we are the first cloud to deliver general availability of VMs based on the AMD MI300X
AI accelerator.
It's a big milestone for both AMD and Microsoft.
We've been working at it for a while.
And it's great to see that today, as we speak, it offers the best price performance on GPT-4 inference.
And we'll continue to move forward with Azure Maia.
In fact, our first clusters are live.
And soon, if you're using Copilot or one of the Azure OpenAI services, some of your prompts will be served using Maia hardware.
Now, beyond AI, our end-to-end systems optimization also makes cloud-native apps and the development of cloud-native apps better.
Right?
Six months ago is when we announced our first general-purpose ARM-based compute processor, Microsoft Cobalt.
And today, I am really excited to announce the public preview of Cobalt-based VMs.
You know, Cobalt is being used for video processing and permissions management in Microsoft 365, helping power billions of conversations on services like Microsoft Teams already.
And we're delivering that same ARM-based performance and efficiencies to many customers, in fact, including Elastic, Mongo, Siemens, Snowflake, and Teradata.
In our most recent benchmark data and tests, our Cobalt 100 VMs delivered up to 40% better performance than any other generally available ARM-based VMs.
So, we are very, very excited about Cobalt getting into the market.
Now, let's move up the stack to the foundation models.
Come on.
Models.
Look, with Azure AI, we offer the broadest selection of Frontier and open source models, including LLMs and SLMs.
So, you can choose the model that makes the most sense for your unique needs and your application needs.
In fact, more than 50,000 organizations use Azure AI today.
Yeah.
It's great momentum.
And it all starts, though, with our most strategic and most important partnership with OpenAI.
Bring on Scarlett Johansson.
Just last week, OpenAI announced GPT-4o, their latest multimodal model, which was trained on Azure.
It's an absolute breakthrough.
It has text, audio, image, and video as input and output.
It can respond and just have a human-like conversation that's fast and fluid.
It can even be interrupted mid-sentence.
GPT-4o is also the top-performing model on benchmarks across a variety of modalities.
And it's always going to get you to the best answer.
It has state-of-the-art performance at understanding the information you actually provide in your prompt,
because that's really what matters.
What OpenAI demoed last week, I mean, that was just pretty magical stuff.
And yesterday, we showed how Copilot will leverage GPT-40.
In the video I'm just going to play, notice that you can share your screen or session as a prompt in Copilot
and have Copilot help you in whatever it is that you're doing.
Let's roll the video.
Come on, another example.
Ah, the Minecraft one, again.
Oh no, come on.
Hey, Copilot, how's it going?
Oh no, come on.
Hey, I'm doing great, and it looks like you're about to dive into some Minecraft.
Ready to build, explore, and maybe dodge a few mobs?
We've seen this before.
Yeah, I'm trying to figure out how to make a sword.
My son plays this game all the time, and I have no idea what I'm doing.
No worries.
I'm here to help you impress your son.
I see you've got a crafting table in front of you, which is a great start.
To craft a sword, you'll need some materials.
Can you open your inventory?
Just press E on your keyboard.
Yeah.
Do I have what I need?
Let's see.
You've got some sticks, perfect for the sword's handle, but it looks like you're missing the material for the blade.
You can use wood, stone, iron, gold, or diamond.
Let's start by gathering some wood or stone.
Shall we head out and find some resources?
Yeah, that sounds great.
Oh my God, what's that?
Oh no, it's a zombie.
Run.
You need to get away from it fast.
Either sprint away or quickly build a tower of blocks straight up to get out of its reach.
If you can find a shelter or dig into the side of a hill, do it now.
Just make sure it can't reach you.
All right.
I think I might be safe now.
That was a close one.
Great job finding shelter.
It's really cool to see that.
Yeah, but this one we've seen.
You know, the magic here is that Copilot is your personal assistant that's helping you with any task,
helping you stay quite frankly in the flow in the moment, whether it's a game or a document or a line of code.
And as OpenAI innovates, our promise is that we will bring all that innovation to Azure, too.
In fact, the same day that OpenAI announced GPT-4o, we made the model available for testing on Azure OpenAI Service.
And today, I'm excited to say that it's generally available on Azure AI.
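As a rough illustration of what "generally available on Azure AI" means for developers, here is a minimal sketch of assembling a multimodal chat request in the OpenAI-style chat-completions schema that Azure OpenAI exposes. The deployment name and image URL are placeholders, not details from the keynote:

```python
# Sketch: building a multimodal (text + image) request body in the
# OpenAI-style chat-completions schema, as exposed by Azure OpenAI.
# The deployment name and image URL are illustrative placeholders.

def build_multimodal_request(deployment: str, question: str, image_url: str) -> dict:
    """Return a request body mixing text and image content parts."""
    return {
        "model": deployment,  # on Azure, this is your deployment name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

body = build_multimodal_request(
    "gpt-4o",
    "What is shown in this image?",
    "https://example.com/photo.png",
)
print(body["messages"][0]["content"][0]["type"])  # text
```

The body would then be sent to your Azure OpenAI deployment's chat-completions endpoint with your API key; only the message-schema shape is shown here.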
Let's see.
Watch out, eh. Maybe...
Maybe it's just the voice mode.
Is multimodality really enabled?
Because OpenAI doesn't have that enabled.
Watch out, eh.
Careful now.
All right, let me ask the agent here.
Pay attention, eh.
Hey, Seth.
How's it going?
What can I help you with today?
I am fantastic.
I'm sorry.
I am in such a hurry.
I'm going to go on an overnight camping hike where we're camping and all this.
It's going to be dark, cold.
Should I...
Should I use these shoes?
It's the mountains.
It's probably going to be cold.
Great, no worries.
Let me take a look.
Ah, those are the Trek Star hiking sandals.
While they're fantastic for summer hikes and keeping your feet cool, they might not be the best choice for cold mountain conditions.
Oh my goodness.
Okay, I'm sorry to interrupt you.
Pick the best thing that I should use and can you just add it to my cart?
Just one product.
No worries.
Let's go with the Trek Ready Hiking Boots.
They're durable, provide great ankle support, and most importantly, they'll keep your feet warm in the cold mountains.
I'll add them to your cart now.
Ready for checkout?
I see that.
Amazing.
Thank you so much.
I hope the camping trip goes well for me.
You're welcome, Seth.
Enjoy your camping trip and stay warm with those boots.
Good luck and have fun in the mountains.
That voice isn't as good.
That voice in Spanish.
You know, Seth was in a hurry, but there you go.
GPT-4o kept up with him.
It's incredible.
I really want to thank the OpenAI team for their partnership and really the responsible approach to innovation, helping our industry move forward.
Sam will be here, in fact, joining Kevin in a little bit to talk a lot more about what's coming, right?
Because that's the exciting stuff: how you all can sample what comes next.
We're bringing lots and lots of other models as well, from Cohere and Databricks and Deci, Meta, Mistral, Snowflake, all to Azure AI.
We want to support the broadest set of models from every country, every language.
I'm excited to announce, in fact, that we're bringing models from Cohere, G42, NTT Data, Nixtla, as well as many more, as models as a service, because that's the way you can easily get to managed AI models.
And we all love open source, too.
In fact, two years ago, at Build, we were the first to partner with Hugging Face, making it simple for you to access the leading open source library with state-of-the-art language models via Azure AI.
And today, I'm really excited to announce that we're expanding our partnership, bringing more models from Hugging Face, with text generation inference, with text embedding inference, directly into Azure AI Studio.
And we're not stopping there.
We're adding not just large language models, but we're also leading the small language model revolution.
You know, our Phi-3 family of SLMs are the most capable and most cost-effective.
They outperform models of the same size or the next size up, even, across a variety of language, reasoning, coding, as well as math benchmarks.
If you think about it, by performance to parameter count ratio, it's truly best in class.
And today, we are adding new models to the Phi-3 family to add even more flexibility across that quality-cost curve.
We're introducing Phi-3 Vision, a 4.2-billion-parameter multimodal model with language and vision capabilities.
It can be used to reason over real-world images or generate insights and answer questions about images, as you can see right here.
Yeah.
And we're also making a 7-billion-parameter Phi-3-small and a 14-billion-parameter Phi-3-medium model available.
With Phi, you can build apps that span the web, Android, iOS, Windows, and the edge.
They can take advantage of local hardware when available and fall back on the cloud when not, simplifying, really, all of what you as developers have to do to support multiple platforms using one AI model.
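The deployment pattern described here — run on local hardware when it's available and fall back to the cloud when it isn't — can be sketched roughly as follows. Both backends are hypothetical stand-ins, not a real Phi or Azure API:

```python
# Sketch of the local-first, cloud-fallback serving pattern described
# for the Phi-3 family. Both backends here are hypothetical stand-ins.

from typing import Callable, Optional

def make_generate(local: Optional[Callable[[str], str]],
                  cloud: Callable[[str], str]) -> Callable[[str], str]:
    """Prefer the local model when present; otherwise use the cloud."""
    def generate(prompt: str) -> str:
        if local is not None:
            try:
                return local(prompt)        # on-device SLM path
            except RuntimeError:
                pass                        # e.g. out of memory: fall back
        return cloud(prompt)                # hosted model path
    return generate

cloud_model = lambda p: f"[cloud] {p}"
local_model = lambda p: f"[local] {p}"

generate = make_generate(local_model, cloud_model)
print(generate("hello"))   # served locally
generate = make_generate(None, cloud_model)
print(generate("hello"))   # no local hardware: served from the cloud
```

The point of the sketch is only the routing decision: the app code calls one `generate` function and never cares which platform actually served the prompt.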
Now, it's just awesome to see how many developers are already using Phi-3 to do incredible things.
So, from Amity Solutions, the Thai company that I mentioned earlier, to ITC, which has built a copilot for Indian farmers to ask questions about their crops.
To Epic in healthcare, which is now using Phi to summarize complex patient histories more quickly and efficiently.
And another very, very cool use case is in education.
Today, I'm very thrilled to announce a new partnership with Khan Academy.
We'll be working together to use Phi-3 to make math tutoring more accessible.
And I'm also excited to share that they'll be making Khanmigo, their AI assistant, free to all U.S. teachers.
Let's roll the video.
I felt like I was in a place in my teaching career where I felt like I was kind of losing my sparkle.
And I would just feel really defeated when I looked out on the classroom and I would see students that just didn't look engaged.
Teachers have an incredibly hard job.
And what we think we can do is leverage technology to take some of the stuff off of their plate to really actually humanize the classroom.
By some miracle, we became a Khanmigo pilot school.
With new advances in generative AI, we launched Khanmigo.
The point is to be that personalized tutor for every student and to be a teaching assistant for every teacher.
I started to build these more robust lessons and I started to see my students engage.
We're working with Microsoft on these Phi models that are specifically tuned for math tutoring.
If we can make a small language model like Phi work really well in that use case, then we would like to shift the traffic to Phi in those particular scenarios.
Using the small language model, the cost is a lot lower.
We're really excited that Khanmigo, especially in partnership with Microsoft, being able to give these teacher tools for free to U.S. teachers, is going to make a dramatic impact in U.S. education.
I think we're going to make them the innovators, the questioners.
Isn't that really just why you wake up every morning, right?
Because that's our future, our next generation.
And to me, that's everything.
Now I think you can hear me better.
Now it's good, right?
You know, I'm super excited to see the impact this all will have and what Khan Academy will do.
And Sal's going to, in fact, join Kevin soon to share more.
And I'm really thankful for teachers like Melissa and everything that they do.
Thank you very much.
You know, of course, it's about more than just models.
It's about the tools you need to build these experiences.
With Azure AI Studio, we provide an end-to-end tooling solution to develop and safeguard the co-pilot apps you build.
We also provide tooling and guidance to evaluate your AI models and applications for performance and quality,
which is one of the most important tasks, as you can imagine, with all these models.
And I'm excited to announce that Azure AI Studio now is generally available.
It's an end-to-end development environment to build, train and fine-tune AI models and do so responsibly.
It includes built-in support for what is perhaps the most important feature in this age of AI, which is AI safety.
Everything from detecting hallucinations in model outputs, to risk and safety monitoring.
It helps you understand which inputs and outputs are triggering content filters, and prompt shields, by the way, detect and block prompt injection attacks.
And so today we are adding new capabilities, including custom categories, so that you can create these unique filters for prompts and completions with rapid deployment options,
which I think is super important as you deploy these models into the real world if an emerging threat appears.
Beyond Azure AI Studio, we recognize that there are advanced applications where you need much more customization of these models for very specific use cases.
And today I'm really excited to announce Azure AI custom models, giving you the ability to train a custom model that's unique to your domain and your data.
The same model builders and data scientists who have been working with OpenAI and brought all the Phi advances to you will work with all of you to be able to build out these custom models.
The output will be domain-specific, it will be multi-task and multi-modal, best-in-class as defined by benchmarks, including perhaps even specific language proficiency that may be required.
Now let's just go up the stack to data.
Ultimately, in order to train, fine-tune, ground your models, you need your data to be in its best shape.
And to do so, we are building out the full data estate, right, from operational stores to analytics in Azure.
We've also added AI capabilities to all of our operational stores, whether it's Cosmos DB or SQL or PostgreSQL.
At the core, though, of the intelligent data platform is Microsoft Fabric.
We now have over 11,000 customers, including leaders in every industry, who are using Fabric.
It's fantastic to see the progress.
With Fabric, you get everything you need in a single integrated SaaS platform.
It's deeply integrated at its most fundamental level with compute and storage being unified.
Your experience is unified. Governance is unified.
And more importantly, the business model is unified.
And what's also great about Fabric is that it works with data anywhere, right?
Not just on Azure, but it can be on AWS or on GCP or even on your on-premise data center.
And today we are taking the next step.
We are introducing Real-Time Intelligence in Microsoft Fabric, an end-to-end solution empowering you to get instantly actionable insights on streaming data.
At its heart lies a central place to discover, manage, and consume event data across your entire organization.
With a rich, governed experience.
Get started quickly by bringing in data from Microsoft sources and across clouds with a variety of out-of-the-box connectors.
Route the relevant data to the right destination in Fabric using a simple drag-and-drop experience.
Explore insights on petabytes of streaming data with just a few clicks.
Elevate your analysis by harnessing the intelligence of Co-Pilot in Microsoft Fabric.
Using simple natural language.
Make efficient business decisions in the moment with real-time, actionable insights.
And respond to changing landscapes proactively.
Allow users to monitor the data they care about and detect changes as they happen.
What's up?
It's easier to design, build, and interoperate with Fabric, with your own applications, right?
And, actually, they're building a new platform with the Fabric workload development kit, so that companies like ESRI, for example, which has integrated its spatial analytics with Fabric, can let customers generate insights from their own locations, using ESRI tools and libraries, right in Fabric, right?
This is exciting to see.
Cool, but all of this is more for enterprises.
Surely there are people in the chat commenting that they use it and that it's cool.
And, beyond Fabric, we're integrating the power of AI
across the entirety of the data stack.
There's no question that RAG is core to any AI powered
application, especially in the enterprise today.
And Azure AI Search makes it possible to run RAG at any scale, delivering highly accurate responses using state-of-the-art retrieval systems.
In fact, ChatGPT, you know, GPTs, their Assistants API, are all powered by Azure AI Search today.
And with built-in One Lake integration, Azure AI Search will
automatically index your unstructured data too.
And it's also integrated into Azure AI Studio to support
your own embedding model, for example.
And so it's pretty incredible to see Azure AI Search grow into a very core developer service.
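A toy sketch of the RAG pattern being described: retrieve the most relevant document for a query, then put it into the prompt as grounding. A real system like Azure AI Search uses learned embeddings and vector indexes; the word-overlap scoring below is only a self-contained stand-in:

```python
# Toy retrieval-augmented generation: score documents against the query,
# pick the best match, and ground the prompt with it. A real system would
# use learned embeddings and a vector index instead of word overlap.

def score(query: str, doc: str) -> float:
    """Crude relevance: fraction of query words present in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the single highest-scoring document."""
    return max(docs, key=lambda doc: score(query, doc))

def grounded_prompt(query: str, docs: list[str]) -> str:
    """Build a prompt that instructs the model to answer from context only."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our return policy allows refunds within 30 days.",
    "The hiking boots come in sizes 36 to 46.",
]
print(grounded_prompt("what sizes do the hiking boots come in", docs))
```

The same shape scales up: swap `score` for embedding similarity and `docs` for an index over your enterprise data, and the grounded prompt is what keeps answers anchored to your own content.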
Now let's go on to the development tools.
And here we are, once again, redefining software development.
GitHub Copilot was the first, I would say, hit product of this generative AI age.
And it's the most widely adopted AI developer tool.
And with GitHub Copilot, we are empowering every developer on the planet to be able to access programming languages and programming knowledge in their own native language.
Think about that: any person can start programming, whether it's in Hindi or in any other native language.
And with Copilot Workspace, staying in your flow has never been easier.
We are an order of magnitude closer to a world where a person can go from idea to code in an instant.
You start with an issue. It creates a spec based on its deep understanding of your code base.
It creates a plan which you can execute to generate the code across the full repo, that is, multiple files.
At every point in this process, from issue to spec to plan to code, you are in control; you can edit it.
And that's really what is fundamentally a new way of building software.
And we're looking forward to making it much more broadly available in the coming months.
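The flow just described — issue to spec to plan to code, with the developer able to edit at every stage — can be sketched as a simple editable pipeline. The stage functions below are trivial placeholders, not the actual Workspace internals:

```python
# Sketch of the issue -> spec -> plan -> code flow, where each stage's
# output can be edited before the next stage runs. The draft_* functions
# are trivial placeholders for the model-driven steps.

def draft_spec(issue: str) -> str:
    return f"Spec: address '{issue}' across the repo"

def draft_plan(spec: str) -> list[str]:
    return [f"Step 1: implement {spec}", "Step 2: update tests"]

def draft_code(plan: list[str]) -> str:
    return "\n".join(f"# {step}" for step in plan)

def workspace(issue: str, edit=lambda stage, value: value):
    """Run the pipeline, letting a callback edit each intermediate result."""
    spec = edit("spec", draft_spec(issue))
    plan = edit("plan", draft_plan(spec))
    return edit("code", draft_code(plan))

# The developer stays in control: here we tweak the plan mid-pipeline,
# like editing the plan to have Copilot also update the readme.
def my_edit(stage, value):
    if stage == "plan":
        return value + ["Step 3: update the readme"]
    return value

print(workspace("support rich text input", my_edit))
```

The key design point the keynote stresses is exactly the `edit` hook: every intermediate artifact is inspectable and editable before it feeds the next stage.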
And today, we are taking one more big leap forward.
You know, we are bridging the broader developer tools and services ecosystem with Copilot for the first time.
We are really thrilled to be announcing GitHub Copilot extensions.
Now you can customize GitHub Copilot with capabilities from third-party services, whether it's Docker, Sentry, and many, many more.
And, of course, we have a new extension for Azure too, GitHub Copilot for Azure.
You can instantly deploy to Azure to get information about your Azure resources just using natural language.
And what Copilot did for coding, we are now doing for infra and ops.
To show you all this in action, here is Neha from our GitHub team.
Neha, take it away.
Thank you, Satya.
GitHub Copilot gives you suggestions in your favorite editor.
Like here, where I'm writing tests.
Little things like that.
Copilot is great at meeting you where you're at, regardless of the language you're most comfortable
with.
So, let's ask for something simple, like how to write a prime number test in Java, but let's
converse in Spanish using my voice.
In Spanish.
Let's see.
How do I check whether a given number is a prime number, in Java?
Nobody speaks Spanish well.
Look at that.
Gracias, Copilot.
Copilot is great at turning natural language into code and back again.
But what about beyond the code?
With the new GitHub Copilot extensions, you can now bring the context from your connected
systems to you.
So, now I can ask Azure where my app is deployed.
I could ask what my available Azure resources are, or I could diagnose issues with my
environment.
And this isn't just for Azure.
I'm also excited to announce that any developer can now create extensions for GitHub Copilot.
I think.
And that includes any tool in your stack, including your in-house tools.
Keeping you in the flow across your entire day.
Actually, 75% of a developer's day is spent outside of coding, gathering requirements,
writing specifications, and creating plans.
Let's show how GitHub Copilot can help with that.
Live, on stage, for the first time.
So, typically, my day starts by looking at GitHub issues.
Looks like we want to support a rich text input for our product description.
Let's open workspace and get some help with that.
Copilot interprets the intent of the issue to see what's required.
And it then looks across the entire code base, and it proposes what changes should be made.
This specification is fully editable, and the whole process is iterative.
But actually, this looks pretty good.
Copilot can now help us build a plan on how to implement this change.
All right.
That's a great start.
But we must not forget about our documentation.
So, let's edit the plan, and have Copilot update our readme.
And then, we can even get Copilot's help in starting to implement the code for us.
Now, this was just a simple example, but in a large enterprise code base,
there are tens of thousands of files and dozens of stakeholders involved.
And that means meetings.
So many meetings.
Workspace helps you focus on what you need to change.
And, by the way, as a developer, I'm always in control.
I can see exactly what changes are that Copilot is proposing, and I can even get a live preview.
Okay.
Let's test that again.
All right.
This looks great.
So, I can go back and edit my code in VS Code, or I can submit these changes as a pull request
to share with my team.
GitHub Copilot, Copilot extensions, and Copilot workspace help you stay focused on
solving problems, and keeping you in the flow.
Back to you, Satya.
Thank you so much, Neha.
I mean, I'll tell you, GitHub Copilot and everything that ecosystem is doing is just bringing a lot of fun and a lot of joy back to coding.
And really, staying in that flow is, I think, what we have all dreamt about, and it's coming back.
That brings us to the very top of the stack, Microsoft Copilot.
We built Copilot so that you have the ability to tap into the world's knowledge as well as the knowledge inside of your organization, and act on it.
Now, Copilot has had a remarkable impact.
It's democratizing expertise across organizations.
It's having a real cascading effect, right?
In fact, it reminds me, like, of the very beginning of the PC era where work, the work artifact,
and the work flow were all changing.
And it's just, you know, really having broad enterprise business process impact.
I always say this: it's lowering the floor and raising the ceiling at the same time for anything any one of us can do.
Since no two business processes are the same, with Copilot Studio, you now can extend Copilot
to be able to customize it for your business processes and workflows.
Today we are introducing Copilot connectors in Copilot Studio, so you can ground Copilot
with data from across the graph, from Power Platform, Fabric, Dataverse, as well as you now have
all the third party connectors for SaaS applications from Adobe, Atlassian, ServiceNow, Snowflake
and many, many more.
Now this makes the process of grounding Copilot in first- and third-party line-of-business data just a wizard-like experience, enabling you to quickly incorporate your own organizational knowledge and data.
We're also extending Copilot beyond a personal assistant to become a team assistant.
I'm thrilled today to announce Team Copilot.
You know, you'll be able to invoke Team Copilot wherever you collaborate.
Right?
It can be in Teams, it can be in Loop, it can be in Planner, and many, many other places.
I mean, think about it.
Right?
It can be your meeting facilitator when you're in Teams, creating agendas, tracking time, taking notes for you; or a collaborator, writing chats, surfacing the most important information, tracking action items, addressing unresolved issues.
And it can even be your project manager, ensuring that every project that you're working on as a team is running smoothly.
These capabilities will all come to all of you in preview later this year.
And we're not stopping there.
With Copilot Studio, anyone can build copilots that have agent capabilities and work on your behalf, independently and proactively orchestrating tasks for you.
Now, simply provide your copilot a job description, or choose from one of our pre-made templates, equip it with the necessary knowledge and actions, and Copilot will work in the background and act asynchronously for you.
Right? That's, I think, one of the key things that's going to really change in the next year.
Spoilers for next year. Interesting, this, eh?
Copilot can even ask for help when it encounters situations that it does not know much about and it can't handle.
And to show you all of this, let's roll the video.
Redefine business processes with Copilot Studio.
Create copilots that act as agents working independently for you.
Simply describe what you want your copilot to do.
Easily configure your copilot with the details it needs, like instructions, triggers, knowledge, and actions.
Quickly test your copilot before you go live.
And seamlessly publish across multiple channels.
I mean, this is the direction the whole AI thing is going to move in.
At least where they're going to try to move it.
You ask for a task, you connect data sources, you connect applications.
It generates a diagram of the steps it's going to execute.
And boom.
Copilot can learn from feedback to improve.
And you're always in control.
Put Copilot to work for you.
With Copilot Studio.
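A toy sketch of the agent behavior being described: a copilot configured with a "job description" of known actions works through tasks in the background, and asks a human for help when it hits something it can't handle. The action names are invented for illustration:

```python
# Toy background agent in the Copilot Studio spirit: it handles tasks
# it has actions for, and escalates the rest to a human. The action
# names here are illustrative only.

def run_agent(actions: dict, tasks: list[str]) -> list[str]:
    """Work through tasks; escalate any task with no matching action."""
    log = []
    for task in tasks:
        handler = actions.get(task)
        if handler is None:
            log.append(f"escalated to human: {task}")  # asks for help
        else:
            log.append(handler(task))                  # acts autonomously
    return log

actions = {
    "check inventory": lambda t: "inventory checked",
    "send order confirmation": lambda t: "confirmation sent",
}
print(run_agent(actions, ["check inventory", "approve refund"]))
```

The escalation branch is the point: the agent acts asynchronously on what it knows, and hands off, rather than guesses, on what it doesn't.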
Copilot transferring money to a Bitcoin wallet.
From your checking account.
You know, all around this stack is perhaps one of the most important things that we at Microsoft are doing,
which is wrapping it with robust security.
You know, security underlies our approach with Copilot, Copilot Plus PCs, Copilot Stack.
We're committed to our secure future initiative.
You'll see us make rapid progress across each of the six pillars of SFI.
You know, and the core design principles, right?
Which is secure by design, secure by default, and secure operations.
You'll hear throughout this conference, in fact, a lot more in Scott's keynote tomorrow,
how it underlies everything that we build and everything that we do.
So, coming to the close: there are many announcements that you will hear about at Build.
But I want to go back to the core of why we chose to be in this industry and why we come to work every day as developers.
Which is the mission ultimately of empowering every person and every organization.
At the end of the day, it's not about innovation that is only useful for a few.
It's about really being able to empower everyone.
And it comes down to you all as developers and builders of this new world.
For us, it's never, never about celebrating tech for tech's sake.
It's about celebrating what we can do with technology to create magical experiences
that make a real difference in our countries, in our companies, in our communities.
Already this new generation of AI is having an incredible impact.
Thanks to all of you, the passion you bring and the hard work you put in.
And I want to leave you with this one unbelievable example of how you're all building a more accessible world,
which means a lot to me, using our platform and tools.
Thank you all so very much. Enjoy the rest of Build.
Audio description is something that enables me to be able to watch a program or a film
and get as much out of it as everybody else who is sighted.
A white car drives down a road, hands on a steering wheel.
I see art as a collective good. I think everyone should be able to have access to art.
Audio description really helps me get the full experience.
A portrait of a group of 17th century civic guardsmen in Amsterdam.
The challenge, though, is that there are limited amounts of audio descriptions being incorporated across media and entertainment.
Tech and AI have the potential to bring the blind and low vision community into the fold.
So at WPP, we really care passionately about opening up access to content to people in a way that they want to consume it.
The tool that we've made is an application that allows you to upload videos and get audio description generated for them.
It's amazing. It's indisputable.
You get your video back.
Real-time video transcription, subtitled.
Visual description.
The voices that are about to arrive.
It's amazing.
Our goal is to be able to offer this product as a service for all of our advertising campaigns.
There are so many artworks in the Rijksmuseum.
There are almost a million.
To describe them ourselves, it would have taken hundreds of years.
With AI, we can do this in a matter of hours.
The subject is a male with a reddish beard and mustache.
Visible brush strokes that add texture and mood.
The first time I heard audio descriptions, it just brought me delight.
It was this opportunity of, oh my gosh.
I'm seeing the power of AI.
We were able to do things that we only dreamt about until recently.
When we strengthen our access to culture,
we strengthen the culture itself,
connecting our shared humanity.
This is not a matter of time.
Why is this a matter of time?
Why is it closed?
Oh, no.
Please welcome Executive Vice President, Experiences + Devices, Rajesh Jha.
Okay.
We're going to continue with the Build keynote.
Good morning.
Copilot is written four times on the screen, guys.
Diving into how Copilot is evolving, it was only a year ago at Build that we showed you
the promise of AI, and then we made Copilot for Microsoft 365, generally available in
November.
I want to spend a moment and talk about the journey that we've been on.
Within Microsoft 365, we brought Copilot to the applications that hundreds of millions
of people use every day, giving our users a powerful new way to interact with AI right
in the flow of their work.
Now, we've seen great adoption across a range of customers and industries.
60% of the Fortune 500 now use Copilot, and we've seen accelerated adoption across industries
and geographies, with companies like Amgen and BP, Cognizant, Moody's, Novo Nordisk, NVIDIA,
Tech Mahindra, and many others purchasing over 10,000 seats each.
We've added over 150 Copilot capabilities since the start of just this year.
What do you say when someone sneezes?
Copilot.
Copilot.
In addition, we continue to integrate Copilot across more of our productivity apps and services,
from OneNote to Stream to Forms to OneDrive and more.
Now, we also launched a standalone Copilot application.
Whereas Copilot had been embedded in Office, Outlook, and Teams, the Copilot app now has all of Microsoft 365 embedded in it.
And what makes the Copilot app unique and uniquely powerful is the grounding it has.
First, our Copilot understands the web.
More than web grounding, the Copilot is crucially grounded in the user's work context.
Who do they work with?
What do they work on?
Their meetings, their conversations, their documents, and that is the Microsoft Graph.
The Microsoft Graph represents the user, their team, their permissions, their organization.
It represents their context.
Now, it's important to know that the Copilot app is no different from any of our other commercial
services in terms of compliance and data handling.
Microsoft has no eyes on access, and your data is not used to train the models.
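The grounding-plus-permissions point above can be illustrated with a toy filter: only documents the signed-in user is allowed to read are ever eligible as Copilot context. The users and document records here are invented examples, not Microsoft Graph data:

```python
# Toy permission-aware grounding: Copilot-style retrieval over the
# "graph" only ever sees documents the current user can access.
# Users and documents are invented examples.

def allowed_context(user: str, docs: list[dict]) -> list[str]:
    """Return only the documents this user is permitted to read."""
    return [d["text"] for d in docs if user in d["readers"]]

docs = [
    {"text": "Q3 budget draft", "readers": {"rajesh", "satya"}},
    {"text": "Team lunch poll", "readers": {"rajesh", "neha", "seth"}},
    {"text": "Compensation review", "readers": {"satya"}},
]
print(allowed_context("rajesh", docs))  # ['Q3 budget draft', 'Team lunch poll']
```

Because the filter runs before any retrieval or summarization, two users asking the same question can get differently grounded answers, each limited to what they could already see.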
So, let's dive in and see how all this comes together in the Copilot app.
I'm curious about that phrase he just said.
In the web tab, you can get answers to simple or complex questions grounded in live web data,
all with commercial data protection.
Now, let's toggle over to the work tab.
Here, Copilot is grounded in Microsoft Graph, meaning it has access to your personalized work environment,
the people you interact with, important files, teams meetings, all your communications.
By the way, this is not a demo account.
This is my actual Microsoft account.
Using real data to show you how personal this experience really is.
Of course, you're going to see some redactions.
I do want to keep my job.
His account, the one he's showing.
Now, learning to ask the right question is key so you can get the most out of Copilot.
This guy takes it slow, huh?
He has his own unhurried pace.
Just when you need them.
Let me start by getting caught up with the latest from my boss.
You may have heard of him.
Copilot understands organizational structure, so it knows Satya is my boss.
It scans the latest emails and chats and files, and in moments I have a detailed breakdown with updates that need my attention.
Careful now, let's see what it says there.
Let's try one more.
Here I'm asking Copilot to propose a session title based on this very keynote script.
Which is a document in SharePoint and to create a list of topics on generative AI to discuss in this session.
Copilot has analyzed a document from my work environment and proposed a suitable title.
But this Copilot is also grounded in brand new web data.
A bit like what Google also has with Gemini.
Well, pretty interesting.
OpenAI doesn't have that, for example.
Since its inception, Copilot has been a uniquely personal assistant, as you've just seen.
But we all work in teams, small and large, organizations intimate and global.
And we want to do more, to go beyond, to empower people when they come together.
And like Satya said, today we are announcing Team Copilot.
The expansion of Copilot beyond a personal assistant.
This will enable Copilot to serve and act on behalf of a team, a department, an entire organization.
Not just an individual user.
Copilot will act as a valuable team member, improving collaboration and project management.
Let's take a look.
Team Copilot becomes a valuable member of the team.
It gets added to a meeting or group chat, like any other participant, making it visible for everyone to interact with.
Here Copilot has been added to a meeting as a meeting facilitator, helping the team stay focused and have a productive meeting.
You put it into a meeting?
Copilot is the one on the right, the guy in blue.
It's an avatar.
All of this is cool.
Let's switch over to Team's chat.
Here Copilot has been added as a collaborator
to help streamline communications.
As the conversation happens,
Copilot maintains an up-to-date view
of important information for everyone to see.
This includes surfacing an unresolved issue for the team.
Copilot will also help the team work together
to resolve open issues.
Here a team member asks Copilot for recommendations
and grounds the prompt with a relevant document.
Copilot instantly responds with suggestions
for the team to consider.
Once the team has agreed on the solution,
Copilot automatically updates the notes pane,
You're asking whether this had already been presented before.
We saw many of these concepts at last year's Build, but I understand they hadn't been turned into products yet.
And now they have it more integrated.
A team Copilot will be a valuable new member of any team.
And these initial capabilities will be available
to our customers in preview later this year.
Now, in my conversations with customers,
one of the top questions I get is,
how do I translate productivity gains
into transformative business results?
And simply put, the answer is
moving to a reimagining of business processes
using your own Copilots and agents
and extending Microsoft Copilot.
Now, Jeff is going to walk you through our extensibility platform, which is going to enable you to do just that.
But first, let's take a look
at the Copilot architecture
that really makes all of this possible.
The Copilot, as we discussed, can recall and reason over up-to-date web knowledge.
It is grounded in search.
All Copilot experiences have this web skill.
Now, in addition to being grounded in the web,
it is also grounded in your data
through the Microsoft Graph.
And when the Copilot is in an application
like Office or Teams or Edge,
it also understands the application context.
So, for example, in PowerPoint,
it is able to drive a slide deck for you
from a document.
The Microsoft Copilot is architected
to compose or inherit capabilities
based on the user context.
So, I understand the data part and the applications part, but having search in the middle is a bit odd.
You can plug in at the data layer and at the experience layer to further extend and customize the Copilot.
And all of this is enterprise grade
with tools for IT to manage
and personalize for employees.
Our leading ISVs are already working with us,
building solutions that can extend the Microsoft Copilot.
And I would like to share two examples
from ServiceNow and Adobe.
First, ServiceNow.
As you know, ServiceNow helps organizations orchestrate
and automate tasks and processes across their enterprise.
Here, we see ServiceNow's Copilot extension,
Now Assist, responding to user prompts
inside Copilot for Microsoft 365
with the exact same knowledge, functionality,
and user experience as it has today in Teams.
ServiceNow has included several custom zero-query default prompts
to help users get started with the most common tasks
without having to know how to craft the right text
to start the conversation.
Over to Adobe, who are working on bringing Adobe Experience Cloud workflows and insights to Microsoft 365 and Copilot.
With the Adobe Express Copilot extension, users stay in the flow of their work in a Word document and can start a workflow in Adobe Express that allows them to create social content, select and edit images, and stage it for publishing.
That's nice.
So, as we close out this first chapter,
I hope you're excited.
Microsoft Copilot is already helping people save time,
be more productive and creative.
It's just that, I don't know, I'm not sure a chat is where I would do a design.
I mean...
I don't know, I'm not convinced.
I'd go straight to the application.
But first, I want to close my section with a video
showcasing how Lumen is using Copilot for personal productivity
to enhance their sales processes
and connecting Copilot to their systems
using Copilot connectors.
Let's roll the video.
Well, but these are plugins.
That's what they're showing us, all of this.
It's a company that's going through a tremendous transformation
and evolution to move from telecommunications to technology.
Our goal is to help our 3,000 sellers
and customer success professionals
shift away from transactional selling
and move to being customer obsessed.
Copilot for Microsoft 365 helps us move faster.
It's really hard for a seller
to learn everything about all of their customers.
And one way they can do this is using Copilot for sales.
What takes sellers hours, they can now do in minutes.
Copilot and Salesforce, for us,
that integration has been vital.
People are coming to the table more prepared.
They have more information.
They're armed.
They're ready to have customer conversations.
Before Copilot, with these third-party data connectors,
the seller would have to go out to all the disparate systems,
Salesforce, Gainsight, ServiceNow.
Copilot connectors help to solve for the fragmentation
that customers can feel,
which drives customer satisfaction.
We have seen an increase in outbound calls by 40%.
Customer outreach typically takes four hours.
Now, with Microsoft Copilot,
with all of these third-party data connectors,
that is a 15-minute journey.
If we can give our sellers back four hours a week,
it's worth $50 million in annual revenue,
and that is math that matters.
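Lumen's "math that matters" can be sanity-checked with a quick back-of-envelope calculation; the implied dollar value per reclaimed hour is my own inference, not a figure from the keynote:

```python
# Back-of-envelope check of Lumen's claim: giving 3,000 sellers back
# 4 hours a week is said to be worth $50M in annual revenue.
SELLERS = 3_000
HOURS_SAVED_PER_WEEK = 4
WEEKS_PER_YEAR = 52
CLAIMED_VALUE = 50_000_000  # USD per year

hours_per_year = SELLERS * HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR
value_per_hour = CLAIMED_VALUE / hours_per_year

print(hours_per_year)            # seller-hours reclaimed per year
print(round(value_per_hour, 2))  # implied revenue per reclaimed hour, USD
```

That works out to 624,000 seller-hours a year, or roughly $80 of revenue attributed to each reclaimed hour, which is the kind of assumption worth scrutinizing in such claims.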
When you unlock people, teams, and culture,
you unlock growth.
We're at an intersection.
It's time for disruption in this industry,
and we are here for it.
Well, as Rajesh shared,
you will be able to easily and securely
use your applications and knowledge
to build copilots that help employees and the organization be more productive and grow the business.
And now you can extend Microsoft Copilot with your own copilot, across all the Copilot experiences and in Microsoft Teams, where you can reach millions of users for the personal and group-assistance scenarios Rajesh described.
And we are making building these copilots even easier, from a few clicks in SharePoint to more advanced customization in Copilot Studio...
...with full control of your models, your data, your applications, your actions, your experience in Visual Studio Code.
Let's first look at what this means for users.
Copilot extensions run everywhere that Copilot is: the standalone experience, across Microsoft Teams, and, as we're showing here, in the Microsoft 365 app.
On the right, you can easily browse your installed co-pilot extensions,
find new ones or build your own, which I'll show coming up.
And how Microsoft co-pilot works is it reasons over the user's prompt
and maps it to the right extensions.
Or you can explicitly @-mention that extension, like we're showing here.
You're going to be able to drill into a deeper focus conversation
with the extension, like we're doing in this marketing example,
that has suggested prompts for quick actions
and to just show the users the capability of your co-pilot.
This allows the Microsoft co-pilot to have real-time access
to knowledge and applications in your environment.
Here we're using a suggested prompt to ask about a key feature
of a delivery drone.
The co-pilot extension you build is going to come back
with a visual adaptive card bringing in all the information
to avoid an unnecessary multi-turn conversation
so the user can just focus on getting their work done.
And again, these co-pilot extensions also run in teams,
in one-on-one, in group chats and channels and in meetings,
so you can reach all these users today.
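The routing behavior described here — Copilot reasons over the prompt and maps it to the right extension, unless the user @-mentions one explicitly — can be illustrated with a toy dispatcher. The extension names and the keyword heuristic are mine; the real Copilot uses an LLM for this, not keyword matching:

```python
# Toy prompt-to-extension router: an explicit @mention wins;
# otherwise pick the extension whose keywords best match the prompt.
EXTENSIONS = {
    "marketing": {"campaign", "social", "brand"},
    "inventory": {"stock", "product", "drone", "warehouse"},
}

def route(prompt: str) -> str:
    lowered = prompt.lower()
    # An explicit @mention always takes priority, as in the demo.
    for name in EXTENSIONS:
        if "@" + name in lowered:
            return name
    # Otherwise score each extension by keyword overlap with the prompt.
    words = set(lowered.split())
    scores = {name: len(words & kw) for name, kw in EXTENSIONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "default"

print(route("@inventory how many units left?"))          # inventory
print(route("what is the range of the delivery drone"))  # inventory
print(route("summarize this email"))                     # default
```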
All right.
This is build, so let's get to building
with our first custom co-pilot extension.
You're going to be able to do this with anything from, again, a few clicks in SharePoint, to advanced customization in Copilot Studio, to Visual Studio Code.
We'll start in SharePoint,
which is often the authoritative source
of knowledge and content processes
with advanced collaboration, workflow and security,
all of which co-pilot honors
to make sure users only get access to information
that they have permission to.
I'm in the SharePoint site.
I'm going to go ahead and select a few documents,
hit click to co-pilot,
and right there, that looks good.
I'll go ahead and change the name of this.
The delivery drone, that looks fine.
And just like that, I've created my first custom co-pilot
that you can use to extend the Microsoft co-pilot.
Let's go ahead and try this one out.
How much does the delivery drone service cost?
And we can see it comes back
with a flat fee of $5 per order.
All that looks pretty good.
It's secure, grounded.
We're going to go ahead and share that with our team.
And up comes the standard sharing dialogue
to honor the security in your organization.
We're going to copy this link.
Go into Teams, paste it in a chat.
We'll say, try this out.
We will go ahead and paste that.
And just like that, in seconds, we've created a secure, grounded custom copilot and shared it with our team in Microsoft Teams.
So we're very excited about letting anybody
create these secure, custom co-pilots
and this support will be available in SharePoint this summer.
Sign up today.
OpenAI also has a lot going on on the enterprise side with certain companies, for example Moderna.
They've at least talked about deploying GPTs for their people to work with.
And I understand Microsoft also wants to offer that.
But we don't know exactly how these companies are using GPTs internally in a more professional way.
Because for me, GPTs as of today aren't all that disruptive yet.
So that's all there from SharePoint.
And let me test this out by asking when the launch event is.
And it'll return back that it's on June 20th.
But what I really want
is that co-pilot to do work for me,
not just answer questions.
And so for this, I need to go into co-pilot studio
and start by adding additional data sources
in the Knowledge tab.
Here we can add websites and files
and connect to over a thousand co-pilot connectors.
In this case, our account information
is in two tables in Dataverse.
And so I'll go ahead and select them.
And we've got the information we need.
Next, we move to the Actions tab,
where I need to see if somebody's already registered
for the event.
And if not, send them a personalized invitation.
Our event registration is managed in an external system.
So we've created a custom connector to go get it.
And you can see us configure that.
We can tailor the action and input and outputs
however we need.
And then the second step is to automate
the sending of that personalized invitation.
We've built a custom power automate flow.
So we'll add that as a co-pilot action as well.
And there you go.
Pretty quickly we've got a complex co-pilot extension
with content from SharePoint, data from Dataverse,
two disparate actions.
And we can go ahead and test this in Copilot Studio to see if Contoso is actually attending the launch event.
It says they're not.
The copilot conversation helps you see why and what's happening, and tap through all of that.
We can ask a follow-up question again
that's routed to Dataverse about who the account manager is.
And we can see that it's Perry Lang.
And then we can ask a co-pilot to go ahead
and send the invitation.
Now again, what's happened here is that
the conversation was entirely generated for me,
identifying and chaining together the key knowledge
from Dataverse and the appropriate actions
we added earlier with generative AI capability.
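The Copilot Studio setup just walked through — knowledge sources plus actions that generative AI chains together at runtime — can be summarized as a declarative sketch. The names and structure below are illustrative only; Copilot Studio is configured through its UI, not through this format:

```python
# Illustrative summary of the copilot built in the demo: SharePoint and
# Dataverse as knowledge, plus two actions (a custom connector and a
# Power Automate flow). This dict sketches the concepts, not a real
# Copilot Studio file format.
copilot_config = {
    "name": "launch-event-copilot",
    "knowledge": [
        {"source": "sharepoint", "site": "Launch Event"},
        {"source": "dataverse", "tables": ["accounts", "contacts"]},
    ],
    "actions": [
        {"name": "check_registration", "kind": "custom_connector"},
        {"name": "send_invitation", "kind": "power_automate_flow"},
    ],
}

# At runtime, the orchestrator decides which action to invoke next
# based on the conversation, rather than following a fixed script.
action_names = [a["name"] for a in copilot_config["actions"]]
print(action_names)
```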
Last, we're ready to publish this co-pilot extension
back to SharePoint, Microsoft co-pilot, and Teams.
And from Teams, you can search for it
in the unified marketplace.
I can add it to one-on-one or group chats
or meetings, et cetera.
And again, I can see this in the Microsoft Copilot experience: that same copilot, and I can continue the experience there.
So we're very excited.
This is a pattern that people are already doing today.
Let me show you what one of our customers,
Wolters Kluwer, who is a leader in information and software
and solutions, is doing to enhance their tax
and accounting professionals workflow with co-pilot.
They're building a co-pilot extension to enable their accountants
to complete each step of their workflows, interacting with their backend system,
just using natural language with no context switching.
Co-pilot takes actions on their behalf, saving time
and cutting the process down from minutes to seconds.
And when it's time to communicate back to their client,
the extension helps close the loop, drafts an email, attach the estimates,
and ensures a seamless end-to-end productive experience.
So we're excited about all of you building these kinds of custom co-pilots
and co-pilot extensions.
Co-pilot Studio is now generally available to build enterprise-based co-pilots.
The new capability to publish co-pilot extensions from your co-pilot
is in private preview.
And as you heard Satya say earlier, co-pilot connectors are now in public preview
and they make it even easier to connect your co-pilots
to your business data, your apps, and workflows.
Okay. Last we want to show you how to build a co-pilot extension
as a professional developer with full control of your models, your data,
your actions, your experience in Visual Studio Code.
So let me go ahead and do that.
So what we've got here is Visual Studio Code with the Teams AI toolkit installed
and we've loaded up a template for building a custom co-pilot
and using it as an extension.
The first thing you see in the code is where I configure the model.
You can use an off-the-shelf model.
Here we're using one from OpenAI.
You can use a fine-tuned model, or you can build your own, completely tailored to what you need.
You can scroll down the code a little bit further.
This is where RAG integration is.
And you can see in a few lines of code we've integrated our data.
In this case, Azure AI Search's vector search capability.
You can use any other data source, but the Azure AI Search capability is a great one to use.
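The "few lines of code" of RAG integration aren't visible in the transcript, so here is a minimal sketch of the underlying pattern — embed documents, retrieve the closest match, ground the prompt with it. A toy bag-of-words embedding and cosine similarity stand in for a real vector index like Azure AI Search, and the documents are invented:

```python
import math
from collections import Counter

# Toy corpus standing in for the demo's indexed product data.
DOCS = [
    "The delivery drone service costs a flat fee of 5 dollars per order.",
    "Chai Tea is restocked every Monday in the product inventory.",
]

def embed(text: str) -> Counter:
    # Bag-of-words "embedding": a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    q = embed(query)
    return max(DOCS, key=lambda d: cosine(q, embed(d)))

# Ground the model's prompt with the retrieved context before generation.
question = "how much does the delivery drone cost?"
context = retrieve(question)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(context)
```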
We keep scrolling down.
We see the actions defined and registered.
But here is where we go search for our product inventory.
And you can see we've inserted a breakpoint so that we can follow this along.
Because I want to show how easy it is to do end-to-end development here.
So we'll bring up teams where we've installed that co-pilot extension for the inventory.
Find information for the Chai Tea product.
Hit return.
And we're paused.
Why?
Because we've hit that breakpoint.
And so we can go back into Visual Studio.
You can see, sure enough, we've stopped right there.
And if I hover over the parameter,
you can see the product name Chai has been passed to it.
So this is incredibly powerful right from within Visual Studio.
To do end-to-end debugging across the Microsoft App's co-pilot and your co-pilot extension.
If we keep going down before we return that to the user,
I just wanted to show what the UX looks like.
Here is an adaptive card that comes back.
It's defined in JSON, but you can also see the user experience for that.
That all looks good.
Let's go ahead and resume execution from the debugger.
You can see that we've now returned the result to Teams.
And we've got an attractive, adaptive card that gives the user the information they need.
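The adaptive card the extension returns is defined in JSON; a minimal card of that general shape looks like the following. The concrete field values (stock count, supplier) are invented for illustration, not from the demo:

```python
import json

# Minimal Adaptive Card payload of the kind the demo returns for the
# Chai Tea inventory lookup; the concrete values here are invented.
card = {
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "type": "AdaptiveCard",
    "version": "1.5",
    "body": [
        {"type": "TextBlock", "text": "Chai Tea", "weight": "Bolder"},
        {"type": "FactSet", "facts": [
            {"title": "In stock", "value": "120 units"},
            {"title": "Supplier", "value": "Northwind"},
        ]},
    ],
    "actions": [
        {"type": "Action.Submit", "title": "Reorder"},
    ],
}

print(json.dumps(card, indent=2))
```

The host application (Teams, the Copilot app) renders this JSON into the native-looking card the user interacts with, which is why one payload works across surfaces.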
It hands off to the Copilot, so right inline, the user can complete the work.
So pretty exciting.
Again, full flexibility within the Teams AI library and Visual Studio Code.
We're excited, just like with Co-pilot Studio,
this is something you can do today,
you're targeting hundreds of millions of Teams users,
and very soon the ability to turn your custom co-pilot
into an extension will be available.
And this is not new.
This is something that hundreds of ISVs are already doing today
across all sorts of experiences in Microsoft 365, Teams,
and now co-pilot.
Just two examples are leading software organizations,
Esri and Thomson Reuters.
Esri is the market leader in geographic information systems.
They're building a co-pilot extension
that adds spatial analytical capabilities
directly into the Teams meetings.
So that users can ask Co-pilot from Microsoft 365
a question about map data.
And Co-pilot will seamlessly hand off
to a rich interactive experience with visualization
in Esri's custom co-pilot with all the associated context.
Use cases, right?
Of how certain companies use these copilots, these extensions.
At some point you'd expect the AI itself to program these extensions: you'd just say you want an extension of this kind and it gets built for you.
A marketplace.
Look, here they're overtaking OpenAI on the right.
And one of the reasons is that IT has confidence in the governance capabilities for the applications and extensions they build.
They haven't talked about agents yet, right?
But there is one more thing.
We are very excited to make it easier for developers who are working with AI to collaborate and build together.
So we have a set of announcements about that this week at Build, too.
First, source code inside Teams, with syntax highlighting.
And, look at this...
We wondered what would get the applause, and that was my vote.
But wait, you should save the applause.
With Microsoft Loop, you can co-edit that source code inside Teams.
But yes, there's more.
Developers have been asking us for a while for more information density in Teams, so now you can switch to a compact mode to see much more content on screen.
And you can be much more productive with things like keyboard shortcuts and new slash commands for Teams.
And, of course, developers are always going to solve problems in chat.
And the applause comes because they can see code snippets in Teams.
My goodness.
And, of course, in the chat, you can bring up a full call between team members and resolve the problem in seconds.
And finally, developer teams love to have fun to break from the stress, so you can now use custom emoji reactions in Microsoft Teams.
Let's go!
Let's go!
Two thousand twelve!
There we are!
Yes, there we are.
Let's go!
We are very excited
to make Teams a great place for developers to work together to build this next-generation
AI.
The next chapter is how Windows is the best platform for building that next-generation AI.
And to show it, I'm excited to invite Pavan to the stage.
Pavan.
Okay, Windows.
Thank you, Jeff.
Good morning.
It is great to be here at Build.
This is one of my favorite times of the year
connecting with fellow product makers
about the world's canvas for innovation, Windows.
I'm excited about the Copilot extensibility that Rajesh and Jeff just shared.
It really shines on Windows, the platform customers choose for Microsoft 365 and Copilot.
Over the last year, we've learned so much
about how co-pilot can best serve you.
What is this? Microsoft Copilot?
They've talked about this before.
We're working hard to make it even more valuable
with the vision for co-pilot meeting you right in your workflow.
Imagine creating a presentation from a document in File Explorer
or helping customers troubleshoot their PCs
using quick actions and natural language right in settings.
We're focused on making co-pilot even more contextual
and useful across Windows.
We took a big step towards that goal yesterday
with the announcement of co-pilot plus PCs,
the fastest and most intelligent PCs ever built.
Let's take a look.
Okay, this is the PCs announcement we saw at the start of yesterday's event.
I think they should do a renaming, a rebranding of Windows to Copilot.
Then everything is perfectly aligned, because you could use Copilot inside Copilot on the Copilot PC, and have a whole dose of Copilot so you can be a bit more Copilot in the Copilot Copilot.
Note that, even within another long keynote, the fact that they put the focus on Copilot Studio earlier and showed a bit of the automations — keeping the focus on a few things — I like that.
And I think that's the flaw we saw in Google's presentation last week, and a bit in this one too: we've gone through a lot of topics, but when they focus on something, set it apart and explain it, I think that helps a lot to make it more interesting.
AI is woven into every layer of these devices,
from the silicon to the operating system,
with the most powerful PC NPUs
capable of delivering
over 40 trillion operations per second.
This new class of PCs
is up to 20 times as powerful
and 100 times as efficient
for running AI workloads
compared to traditional PCs
from just a few years ago.
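The 40-TOPS figure can be put in perspective with a rough throughput estimate for running a small language model on the NPU. The model size, int8 precision, ops-per-parameter count, and perfect utilization below are all simplifying assumptions on my part, so treat the result as an upper bound only:

```python
# Rough, idealized throughput estimate for a 40 TOPS NPU running a
# small language model. Assumes int8, ~2 ops (multiply + add) per
# parameter per generated token, and 100% utilization -- real-world
# numbers will be considerably lower.
NPU_TOPS = 40e12            # operations per second
PARAMS = 3.8e9              # e.g. a Phi-3-mini-sized model
OPS_PER_TOKEN = 2 * PARAMS  # multiply + add per parameter

tokens_per_second = NPU_TOPS / OPS_PER_TOKEN
print(round(tokens_per_second))  # theoretical ceiling, tokens per second
```

Even with generous discounting for memory bandwidth and scheduling overhead, the arithmetic shows why on-device SLM inference is plausible on this class of hardware.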
Built together with our silicon partners,
AMD, Intel, and Qualcomm,
and our OEM partners,
these PCs will be available June 18,
starting with Qualcomm's
Snapdragon X series of chips.
Copilot Plus PCs are redefining
what you can do on a PC
and setting the direction
for the next decade of Windows.
To put this new wave of AI innovation
in your hands,
we're excited that Qualcomm
has announced
Snapdragon Dev Kit for Windows.
It is designed to be
your everyday dev box for AI
with the power and flexibility
you need.
As we define this new path
for Windows in the era of AI,
one thing that will never change
is our commitment to openness.
We recognize that the real value
of Windows comes from the energy
and the innovation of the ecosystem.
It comes from all of you.
So as we enter this new era,
let's talk about how we're going
to serve over a billion Windows
customers together.
As Satya said earlier,
building a powerful AI platform
takes more than a chip or a model.
It takes reimagining the entire system
from top to bottom.
The new Windows Co-Pilot runtime
is the system
that extends the Co-Pilot stack
to Windows.
The Windows Co-Pilot runtime
is a new integral part
of Windows 11
and has everything you need
to build great AI experiences.
Whether you're just getting started
or already have models of your own.
It includes the Windows Co-Pilot library,
a set of APIs that are powered
by on-device models
that ship with Windows
and includes AI frameworks and toolchains to help you bring your own on-device models to Windows, and it's built on a foundation of powerful client silicon, including the NPUs in the Copilot Plus devices.
This is the work
that we have to do.
Let's take a look
at how the Windows Co-Pilot runtime
enables an entirely new class
of experiences.
OS experiences like Recall
that help users find anything
they've seen on their PC.
Inbox app experiences in Photos and Paint, which let you bring your ideas to life using real-time image generation, and app experiences like CapCut and Cephable and DaVinci Resolve.
Some of our first partners
using the new NPU
and helping us build
the Windows Co-Pilot runtime.
Looking ahead,
the Xbox team
has a vision
for using the Windows Co-Pilot runtime
to empower players
and game developers.
Let's take a look.
Okay.
Copilot and Xbox...
X-Box...
Damn it. Xbox.
What's wrong with me?
X-Box.
Ah, how cool.
Now they can teach you how to play Solitaire.
How nice.
Of course, if you have Copilot, which is your co-pilot, then Solitaire isn't so solitary anymore.
Because you're not alone anymore.
It would be co-solitaire.
It's not what I'd expect, to be honest.
This is going to be cool.
As I said at the beginning: forget the screen-recall feature itself; the point is that the computer is processing what's happening on your screen. That's what's interesting for the future that's coming.
This is one example: with a large corpus of data and the Vector Embeddings API, you can create vector stores for your data.
That is powerful when combined with the RAG API to enable natural language search in your applications for your users.
Of course, that is just one example.
The Windows Copilot Library APIs cover the spectrum, from APIs to pipelines to models, like the recently released Phi-3, one of the best SLMs in the world.
Phi-3 mini does a better job than models twice its size on key benchmarks.
Today, we are happy to announce Phi Silica, a member of the Phi family of models specifically designed for the NPUs in Copilot Plus PCs.
It is fast at on-device inference and built with a responsible-first approach.
Windows is the first platform to have an SLM custom built for the NPU, shipping in the box.
Now, let's see what you can do to bring your own on-device models to Windows, using frameworks and toolchains.
Let's start with DirectML, the lowest-level machine learning framework on Windows, similar to DirectX for graphics.
Whether it's your own model or an open source model from Hugging Face, DirectML helps you scale the breadth of your efforts across the Windows ecosystem by giving you to-the-metal access to GPUs and NPUs.
We also know that a lot of you
do your development on PyTorch on Windows,
and we're thrilled to announce
that Windows will natively support PyTorch
through DirectML.
That's right, pretty exciting.
Thousands of Hugging Face models
just work.
Native PyTorch support, of course,
means that Hugging Face models
will just work on Windows,
but we're collaborating with NVIDIA
to bring these workflows
to over 100 million RTX AI GPUs
in the Windows ecosystem.
Now that's incredible.
You can download the PyTorch
and DirectML developer preview today.
We're also going to extend DirectML
to our web developers
by introducing WebNN on Windows.
WebNN is a web-native
machine learning framework.
Microsoft has been working with Intel
and other partners to unlock the access
to local ML accelerators
so you can build performant AI experiences
in your web apps.
Behind me,
you see ClipChamp's auto-compose feature,
achieving faster video composition experiences
and cloud savings
by leveraging the NPU
through ORT Web and WebNN.
I'm excited to announce
that WebNN is available
in developer preview today.
Okay.
So that's a glimpse
of the Windows Copilot runtime,
how it lays a foundation for innovation,
giving you the largest catalog of models
on the largest ecosystem of devices,
making Windows the most open platform for AI.
I heard you there.
That's fantastic.
As Windows transforms
for the era of AI,
we're continuing to reach the expanse
of the platform,
including all the AI experiences you create
with the Windows Copilot runtime.
We're delivering Windows from the cloud
with Windows 365,
so your apps can reach any device anywhere.
And we're introducing Windows experiences
to new form factors beyond the PC.
For example,
we're deepening our partnership with Meta
to make Windows a first-class experience
on Quest devices.
And Windows can take advantage
of Quest's unique capabilities
to extend Windows apps into 3D space.
We call these volumetric apps.
Let's take a look.
Wow.
Volumetric applications for the Quest.
Microsoft is partnering with Meta
to bring Windows 365
and local PC connectivity to Quest.
And they're enabling developers to easily extend their Windows apps into 3D space.
PTC has been working with this platform, bringing Creo into mixed reality in under a day.
to enhance spatial understanding,
without leaving the app
that powers their work.
Sign up for the developer preview today.
Very surface-level, but...
That's great to hear.
As developers, you'll have access to the volumetric API.
And this is just one of many ways
to broaden your reach
through the Windows ecosystem.
For decades,
Windows has been the stage
for the world's innovation.
With Copilot plus PCs,
Windows Copilot runtime
and Windows 365,
we're going to unlock
a new era of innovation together.
Thank you.
Back to you, Rajesh.
Very interesting, because if you suddenly have the Quest connected to the computer, and an application like GPT-4o running locally over what you're seeing through the headset, that's very interesting.
From the expansion of Copilot
beyond a personal assistant
to acting as a valuable team member
to how you as developers
can extend Microsoft Copilot
with your own Copilots and agents
in just a few clicks in SharePoint
to more advanced customization
in Copilot Studio.
You can use VS Code,
full control of your models,
your data and actions.
And, of course,
a phenomenal opportunity for developers
with over a billion Windows customers.
So I'm going to close
with highlighting another customer,
Amgen,
a pioneering biotechnology company
that harnesses the power of biology
and technology
to fight the world's toughest diseases.
We are going to see how they harness Microsoft Copilot in their mission.
Kevin Scott,
our CTO,
along with some special guests,
will round out the day one of keynotes.
But before the video,
let me just finish by simply saying thank you.
Thank you for spending your time
with us here at Build.
It means a great deal to all of us
and thank you for the trust you place
working together,
building the future with us.
I think they're going to close with this video.
Thank you.
I don't know if he's saying goodbye, or if the event is closing with this video.
We are at a once-in-a-generation moment
where we have the opportunity
to harness the power of tech and biotech
to revolutionize drug discovery
and development,
manufacturing and commercialization.
We started with 300 licenses
and now 20,000 of our employees
have access to Copilot.
In manufacturing,
you're moving at a very fast pace.
You have a lot of data coming at you.
Copilot is one tool that allows us
to synthesize that information
and ensures that we can focus
on what's most important,
the production of that drug.
We've taken advantage of extensibility options
using graph connectors.
Identifying opportunities to use Copilot
for daily efficiency gains
and multiplying those by a hundred
or a thousand times in the life of a molecule.
From idea to achieving marketing authorization
starts to give you a sense
for what's possible.
I think Satya will close; it doesn't make sense to end with this video.
We have capabilities in our hands with these new tools
in the early days of this new platform
to absolutely do amazing things
where literally the challenge for you all
is to go do some legendary shit
that someone will be in awe of you for one day.
When people are looking for food resources,
it can be a humbling experience.
Childhood hunger and food insecurity
is a problem that we can solve
and we are using AI to connect those in need.
We know through research that people prefer
sometimes to talk to an AI agent,
talk to a chatbot
because they're not talking to a human.
Technology can really help people get what they need
with multiple resources in one place
and it's making it mobile friendly,
which is really important.
Most people access this information via smartphone.
What we're doing at No Kid Hungry
is ensuring that we end childhood hunger.
Around 2019, Brian started having some issues with his ankle.
I was able to get him an appointment with a doctor
and he was like, I need to tell you, but this is ALS.
How was I living in the world
where my kids would be able to hear me say
I love you?
The biggest opportunities when it comes to
launching technology is the gap that I can fill.
AI has the ability to increase communication,
increase independence.
It's the next wave for accessibility.
What Microsoft is doing,
their voice making technology
really gives someone back that intonation
and a little bit more of their personality.
Today, we're going to play like a newer version of your voice.
So you just choose any phrase
and then I'll play it on here as well.
Hey, beautiful people.
I am so happy to be with all of you.
All right, I'm going to type it out.
Hey, beautiful people.
I'm so happy to be with all of you.
Dang.
I think this sounds just like me
and I am overwhelmed with joy
that my family gets to experience
a part of me.
In a way, I feel like I am back.
We'll talk a little bit about it
and compare with Google's last week.
So, we also have three events aligned
in these two weeks.
I think the comparison
will be really interesting.
Please welcome
Chief Technology Officer
and Executive Vice President
of AI, Kevin Scott.
Okay. Watch out, eh. The CTO. Kevin Scott.
Sam Altman might come out here.
A lot of people are different than they are.
But as a person who has dedicated their entire career
to building systems and infrastructure,
tools and frameworks for other developers to use
to go make the things that they need to make
for whatever reason that they need to make them,
it's just extraordinarily gratifying to see
the really, truly consequential things
that you all are choosing to do with these new AI tools
that we had a role in bringing into existence.
So I just want to thank all of you so much
for all of the great shit that you have made over the past year.
So, you know, I want to give a call out to our friends
at Share Our Strength.
But it's not just the really wonderful work
that we just showed in the video that folks are doing,
like as Satya mentioned earlier,
we have over 50,000 customers using Azure AI tools
and the Azure AI platform.
And like this range of customers spans everything imaginable
from all different industry categories,
all different scales of business,
from small startups to Fortune 100 companies
and all different stages of exploration of how to use AI
to do transformative things from doing explorations
like trying to find product market fit
to scaling things where you found the product market fit
to folks who are just trying to figure out
how to optimize and enhance the things
that they've been doing for a very long while.
So I just wanted to highlight another couple of things
that I thought had been really noteworthy collaborations
that my team has had with some of you all in the audience.
So we've been doing some really cool work with Etsy
using generative AI and the Azure AI platform
to build features like their new gift mode.
So we have the CTO of Etsy here in the audience with us today.
Hello.
And you all should go check out Gift Mode.
Like it is like a really innovative way
to help you figure out how to buy things
for people who are difficult to buy for,
which I'm guessing like almost all of us in the room are.
It's like super stressful trying to figure out
what the right gift is for the right person
that you care about in your life.
And this is just one of many interesting things
that Etsy is doing with generative AI
to really enhance the experience of the Etsy product
and to bring delight to their customers.
Another thing that I'm super excited about
is the work that Cognition has been doing.
So Scott and the Cognition team are also here with us today.
We have just recently announced a partnership
between Microsoft and Cognition.
Their product Devin is like an absolutely amazing tool.
Right, I hadn't connected that Devin is from Cognition.
Imagine for yourself like some of the most tedious things
that you do as an engineer or software developer.
Curious, eh?
Devin is a tool designed to help you with those tasks.
Like I can't even tell you all the number of times
as an engineering leader I've either,
or as an engineer, that I've had to write code
or lead teams writing code for doing things
like replatforming an application.
And like replatforming is like one of the laws of physics
of like how we build software systems.
Like it's a thing that must be done,
but like, you know, it's rare that an engineer
really enjoys doing that.
And so like the incredible work that Devin is doing
on top of these incredibly powerful tools
is just really extraordinary.
And we're super excited to be partnering with them
and to bring all of the power of what they're doing
to Azure and having their systems and infrastructure
run on Azure.
So, you know, I really want to talk with you all today
about just a couple of simple things.
What's driving all of this progress?
Like why is all of this happening right now?
And so part of it is like we're riding
an extraordinary platform wave.
Like something is fundamentally changing
in the universe of technology,
much in the same way that it changed
when we were going through the PC revolution,
where Moore's law was driving an incredible increase
in the power and lowering of the cost of personal computing,
which led to it becoming ubiquitous
and, you know, something that we now all get to take
for granted.
A similar thing happened with the internet revolution,
where networking technology connected all of this compute
together and allowed us to do things
that previously were unimaginable.
And we're going through one of those major technological changes
right now, being partly driven by a set of things
that we'll talk about in a few minutes,
just the incredible scaling of the capability of AI systems
as you apply more compute and more data to training them.
But like before we get to that expansion of the frontier
of the increase in those capabilities,
like a super important part of the emergence
of a new powerful platform is sort of completing the stack.
So it's actually hard work,
even when you have a piece of technology
that is improving at an exponential rate,
to figure out how to do all of the things
that have to be done in order to deploy it in real applications
so that you can go out and deliver value
to real customers who care about what it is that you're doing.
And we've done a huge amount of work over the past year
on the Copilot stack.
It is both optimizing a bunch of systems,
so things are getting cheaper and more capable,
and it's also building that whole cloud of capabilities
and systems, services, and tools around the core AI platforms,
the big models that you all need,
and the choices that you all want
so that you can build the things that matter to you
under the constraints that you're operating under.
So, one of the reasons that we have been able to do this
is no other company has deployed more generative AI applications
over the past year than Microsoft has.
And so you have probably heard us over the past year talking about
Copilots.
Like this new software pattern that we originated with GitHub Copilot
where you pair powerful generative AI
with this user interface paradigm
where you're using the AI to help assist users with tasks.
And so you can apply this to everything,
and I know many of you in the audience
are building your own copilots.
So Microsoft itself is building copilots for service,
for sales, a copilot in Bing, a copilot in Edge,
a copilot in Windows.
And the reason that we've been able to do all of this work
is because we have the copilot stack
that we built for ourselves to help us have real agility
in getting these products built quickly,
to have them built efficiently where they're price and cost optimized,
and to build them in a way where they're safe and secure.
One of the things that you have heard from Rajesh
and that you'll be hearing a lot more of at Build
is like part of what the copilot stack is allowing us to do
is to unify the experience across all of these copilots
into one logical Microsoft copilot
where you don't have to really pay attention
to which Microsoft product or service you're in.
Like the copilot just understands all of your context
and delivers all of the capability of the model
in the context of your data and your tasks to you when you need it.
So, the other thing that is really driving progress
is not just this sort of completion of the copilot stack,
this sort of progress that we're making
and filling out that toolkit for you all
so that it is easier for you to build software.
But we are riding like a fundamental wave
in the development of this AI platform.
where if you just sort of look at compute over time,
like how much GPU cycles or accelerator cycles
that we're using to train the very biggest models in the world,
since about 2012, like that rate of increasing compute
when applied to training has been increasing exponentially.
And we are nowhere near the point of diminishing marginal returns
on how powerful we can make AI models
as we increase the scale of compute.
So, we're sort of doing two things at once at Microsoft.
So, we are optimizing the current frontier
and like building that toolkit to help you all leverage it
while at the same time investing at a pretty incredible rate
in pushing the frontier forward.
And so, one of the super interesting things that has just happened
as we're sort of pushing the frontier forward
and like really, really driving forward on efficiency.
So, you have to check it out, eh?
Our partners at OpenAI launched one last week
in the form of GPT-4o.
As I mentioned earlier, GPT-4o is like a really stunning achievement.
A multimodal model that understands a bunch of different input types
from video to text to speech.
It can respond in a bunch of rich ways
from text to speech and eventually video.
It can respond to you, or to users' interactions
in their applications, in real time.
And in the case of the ChatGPT demos that folks have seen,
you can even interrupt the model
so that you can have really fluid interactions with these systems.
And then, an enormous amount of work has gone into GPT-4o,
both the model itself as well as the supporting infrastructure around it,
to ensure that it's safe by design.
So, I wanted to also just remind folks, like this efficiency point is real.
So, while we're off, you know, building bigger supercomputers
to get the next big models out and to deliver more and more capabilities,
we're also making the current generation of models
much, much more efficient.
Since the launch of GPT-4, which is not quite a year and a half ago now,
it has become 12 times cheaper to make a call to GPT-4o
than to the original GPT-4 model.
And it's also six times faster in terms of like time to first token response.
It's just really extraordinary how much progress we're making
because of like the full set of optimizations that we're doing
from the silicon that we're building, networks, data center optimization,
as well as like an incredible amount of software work that we're doing
on top of all of this hardware and infrastructure
to really tune the performance of these systems.
And the great thing is, again, there's no point of diminishing marginal return here.
And one of the messages that I want to land with you all today
is that you can count on things getting more robust
and cheaper at a pretty aggressive clip over time.
So, it's like a really important thing to internalize,
like something that we challenge ourselves on,
is to aim for things that are really truly ambitious,
because all of this optimization work is going to accrue to make things
like really ubiquitous in terms of how you deploy them.
I just want to show a little quick demo video here.
So let's roll the video, show you.
Hey, Microsoft Build Conference is tomorrow,
and I have not been sleeping and I am making stupid mistakes.
Can you help me take a look at this code?
I'd be happy to help.
Could you tell me a bit more about the issue you're facing with the code?
For example, are you getting an error message
or is the code not doing what you expected to do?
Let me just show this to you.
It looks like you're working on a Python function
to summarize main points from a keynote.
What I'm trying to do is go through a list of keynoters
and summarize all of their work.
And my problem is that I wanted to just add Kevin's name
to the end of the keynote list.
And it added every character of his name instead of just his whole name.
What did I do wrong here?
Change keynote_list.extend to keynote_list.append(last_keynote).
This will add Kevin as a single element to your list.
Just like Satya and Rajesh.
Oh, that sounds good. Let me try that.
Sounds good. Give it a go and let me know how it works.
Okay. I replaced extend with append. Does that look right?
Yes, that looks better.
All right. I made that change. Let me run it and try.
Sounds good. Let's see how it goes.
It worked. Satya, Rajesh and Kevin all came out.
Awesome. I'm glad it worked out.
Sometimes the smallest things can trip us up.
Especially when we're sleep deprived.
Thank you.
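The bug in the demo is the classic Python distinction between `list.extend`, which iterates over its argument (so a string gets unpacked into individual characters), and `list.append`, which adds the argument as a single element. A minimal sketch of what the demo's code likely looked like (the names `keynote_list` and `last_keynote` are taken from the spoken dialogue; the exact code was not shown):

```python
# extend() iterates over its argument, so a string is unpacked
# into individual characters -- the bug in the demo:
keynote_list = ["Satya", "Rajesh"]
keynote_list.extend("Kevin")
print(keynote_list)  # ['Satya', 'Rajesh', 'K', 'e', 'v', 'i', 'n']

# append() adds the argument as one element -- the suggested fix:
keynote_list = ["Satya", "Rajesh"]
last_keynote = "Kevin"
keynote_list.append(last_keynote)
print(keynote_list)  # ['Satya', 'Rajesh', 'Kevin']
```

`extend()` would have been correct if the intent were to merge another list of speakers, e.g. `keynote_list.extend(["Kevin"])`.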
Come on, come on, come on.
This is old news already. This is from last week.
It really is extraordinary. So like I should say, by the way,
that Jennifer would never make that actual mistake
in writing a Python application, but Kevin might.
So like this is all giving me hope that a rusty old CTO
might one day be able to program again if the need arose.
But like I do want to make sure that we're paying attention
to like just how much has changed over the past year.
I mean like what you just saw would have been absolutely
inconceivable to think about actually working.
And like this was, you know, not a tortured demo.
Like, you know, we were sort of looking for something to show
off GPT-4o to you all last night.
And Jennifer showed me this.
And then she just recorded this demo.
Like this is just crazy that it works this well.
It's cool because the guy really lives it,
you can tell he likes this stuff. I like him.
Oh, please.
They know, they know we're waiting.
And we've been working for a while on this series of models
called Phi that are small language models.
So Satya chatted a little bit about this in his keynote earlier.
And the way that we think about Phi is just imagine an efficient frontier.
And so usually when you're building these models,
you're trading a couple of things off.
So you can trade size off, which is related to performance and cost
and a whole bunch of other things versus quality.
So the smaller the model is, like the cheaper it is to do inference
and like the less compute that you need to actually run the model.
So small models are more amenable to running on devices.
But it usually means that you have to take a hit on quality.
And like what we're discovering, in particular over the past year,
is that there's this notion of an efficient frontier.
So we don't even show the GPT-4o point on this slide.
It would be like way, way, way off to the right,
just in terms of the size.
So like if you want extreme levels of quality and performance,
like a frontier model is your friend.
But in some cases, like you may want to choose one of these other models,
somewhere else on this efficient frontier,
where the trade-off that you're making between
a cost to serve or latency or locality
is acceptable given the quality that you can get.
And the very interesting thing that's been happening over the past year
is the quality that you're able to achieve in these small models
is getting pretty high.
So I just wanted to show this as an illustration.
So like, you know, remember back ancient history
to the launch of ChatGPT in November of 2022.
So ChatGPT launched on top of GPT-3.5.
And so like everybody was just absolutely gobsmacked
at what was possible with GPT-3.5.
I mean, just this sort of a stunning revolutionary thing that happened.
We fast forward a few months to March 2023
and ChatGPT gets an upgrade to GPT-4,
which is even more extraordinary what it's able to do.
Like you're able to ask these things extremely complicated questions
and get very rich, interesting, compelling...
Complete the linear regression for me, please.
So fast forward to today, and like you can sort of see that
a version of Phi-3, optimized to run on a mobile phone,
can respond to a prompt, just like ChatGPT could,
just a year or so ago, with responses that are sort of equivalent.
And like this is not arguing that, you know, the Phi-3 that's running on this device
is just as powerful as GPT-4; it is not.
But the way that you all should be thinking about it is,
in many cases, these models can be appropriate to use
for building your applications when you have a particular set of constraints
that you're trying to optimize towards.
And so like all of this is kind of, you know, abstract in a way.
And so I wanted to really motivate why this matters with the following example.
So Satya mentioned earlier the partnership that Microsoft has formed
with Khan Academy.
And Khan Academy's mission is really interesting and important.
Like they are trying to ensure that every learner on the planet,
no matter where they are, has access to high-quality,
individualized instruction.
And so one of the things that we are exploring together with Khan Academy
is the possibility of achieving that goal of ubiquity
of these personalized learning agents by using something like Phi-3,
where you can imagine training a Phi-3 model
that's very good at something like math instruction.
So this is an actual interaction with Phi-3 medium
that has been fine-tuned to work particularly well for math tutoring.
And the challenge with doing something like this
is that you have to not just have the model give the student an answer,
but like you want it to lead them towards discovering the answer themselves.
So like a tutor is very different from an answer agent.
And so like it's just exciting to think about how many tools
that organizations like Khan Academy have to solve like these really,
really important missions that they have in the world.
And so with that I'd love to bring Sal Khan from Khan Academy.
Don't scare me. Don't scare me. Don't scare me.
Don't swap Sam Altman for Sal Khan on me.
Hey Sal. Thank you so much for being here with us today.
So, you know, we have been chatting about your mission, I think, for quite a while.
And one of the interesting things that happened when ChatGPT burst onto the scene a few years ago
is that there was this reaction from a bunch of educators,
and I think it was actually a reasonable reaction where, like,
okay, we don't understand this, like, we don't want our students using it,
they're gonna, like, do things that we would prefer that they didn't do.
You, on the other hand, looked at this and said,
this is amazing, and leaned all the way in.
Can you explain a little bit about what drove your first reaction to this new technology?
Yeah, you know, some of you all know how Khan Academy got started.
If you go almost 20 years ago, it started with me tutoring a cousin.
I was a hedge fund analyst at the time.
I tutor one cousin, word spreads in my family, free tutoring is going on.
Before I know it, I'm tutoring 10, 15 cousins.
And I start writing tools for them, software, I start making videos.
That's what a lot of people know about Khan Academy.
And if you think about that journey from then till now,
even right before we started really working on generative AI,
everything we've been doing is how could you scale that type of personalization
that I was originally doing with my cousin Nadia.
And we were approximating it with software and videos and teacher tools.
But to some degree we knew
we were going to asymptote on how far you could get with pre-generative AI tools.
And then when we saw it, it was really GPT-4 that opened our mind.
You know, Greg and Sam from OpenAI showed it to us.
In the summer of 2022.
We realized that there are things that have to be worked out,
but it could get that much closer to emulating what a real tutor does.
And it was obvious.
Oh, it could also be a cheating tool, and we had to worry about safety and privacy,
especially with under-18 users.
But I told the team, let's turn those into features.
Let's put the guardrails on it.
Because this could get us that much closer to our mission,
which is a great education for the world.
Yeah, I think one of the other things that you all have done,
and, like, this is a really important thing to internalize about these models and systems,
is, like, the model isn't a product and, like, the systems aren't silver bullets.
Like, you still actually have to understand, like, who your customer is,
like, what problem you're trying to solve,
and, like, how to go deal with a whole bunch of gnarly things
on top of the, you know, this incredibly interesting and powerful tool
so you can do something useful.
Like, you want to talk a little bit about what you had to do there?
Yeah, and I have to admit, and maybe everyone,
a lot of people in this room or in the world right now are experiencing this,
every now and then you see some of these demos and you're, like,
does my application even have relevance anymore?
This thing's going to be able to do everything.
But then when you sit down and you really think about how a school system,
a teacher, a student is going to use it, you ask:
what are the guardrails? What about privacy?
How do you make sure that it really does the tutoring interactions appropriately?
It's aligned to standards.
You realize that there's a lot to do at the application layer.
Now, I think we're all discovering together this new world of developing applications
in terms of large language models. It's not deterministic in a traditional way.
You have to have evals. You have to constantly test.
But we're realizing that there's just so much to do there.
It's a very exciting time.
Yeah, I mean, one of the things that I'm actually excited about is, like,
this mission that you all have for Ubiquity and, like, the partnership that we're doing
with you all is going to enable you to get every teacher
in the United States hands-on with Khanmigo and your tools.
And, you know, like, just a personal anecdote for me is, like, my daughter is in the ninth grade.
She's taken biochemistry and just in love with science in general.
And she, on her own, without any prompting from dad, figured out how to use the free version of ChatGPT
to take a bunch of biochem papers that were, like, way, way, way more complicated
than a 15-year-old by rights has to understand.
Dump them into ChatGPT and then just ask a million questions about it.
And, like, her learning acceleration because she's figured out how to use this tool is extraordinary.
And, like, I just want every kid in the world to, like, have the same experience that my daughter has.
Absolutely.
And, you know, what we realize at Khan Academy, there is a subset of students that if you give them the tool,
and it sounds like you're lucky enough to have a daughter like that, they will run with it.
But what you really need in most cases is you need caring adults, primarily teachers in the room.
Yeah.
Motivating students, driving that usage.
And so what we're really excited about this partnership, and this is a big deal.
I want to make sure, you know, we are using state-of-the-art models that use real compute.
It has real cost associated with it.
When we launched Khanmigo, which is still out there, and it's a tutor for students.
It's a teaching assistant for teachers.
But what we're launching today as part of this partnership is these state-of-the-art teacher tools,
we're going to be able to give free to every teacher in the United States
so that they can get productivity and improve.
Yeah, big, big, big.
He says the word "free" and people love it.
Think how surprising it is, if you think about it, that four years ago we were on GPT-2.
Four and a half years.
Let's call it five.
Five years since GPT-2.
Look how much the wave has grown up to today, right?
GPT on the Build stage.
And, like, having written a book myself, like, it's, like, a lot, a lot of work.
So, everybody should, like, you know, pay attention to these fantastic endorsements
the book has gotten and, like, go read a copy of it.
It's, like, a fascinating work, not just about education and your mission, but, like,
I think also it has a bunch of really interesting lessons about how you can ambitiously use AI
to solve hard problems.
But why did you choose to write a book now?
You know, this is the second book that I wrote.
The first book I wrote back in 2011.
And I remember when, uh, the publisher had reached out to me then.
I was like, why would I write a book?
I could just put it all on YouTube and, you know, share it with the world.
But there's something about writing a book that lets you frame the problem.
And I felt that ten years ago, when, or, twelve years ago, when Khan Academy first came on the scene.
And I think this moment, we all feel even a little bit more overwhelmed.
I mean, it's been mentioned these, this exponential growth.
We're all feeling like things are changing every week.
And I wanted to take the time to, for myself, understand where we are and where we're going.
And hopefully in a reasonably timeless way that's not going to be dependent on whatever the frontier model of the day is.
And I think, especially anyone who cares about education, cares about work, cares about what skills their kids should learn to keep up,
how we can all be more productive, what the future of admissions and recruiting is going to look like,
hopefully this will be useful for them, too.
Yeah.
Well, I am incredibly grateful for the partnership that we have and, more importantly, for the work that you're doing in the world.
So, thank you so much for being with us today.
Thank you so much, Kevin.
Thank you.
Maybe it wasn't Sam Altman, but Altsalman.
Altsalman.
Altsalman.
Altsalman.
Another really incredibly impactful area where these new AI tools are going to have an enormous impact is healthcare.
So, I just wanted to share another personal anecdote with you all.
So, like I grew up in rural central Virginia and my mom and brother and most of my family still live in the place where I grew up.
My mom is like a 74-year-old southern woman and has been suffering from a thyroid condition for 26 years, which was, you know, entirely under control, like she'd been taking medication for it.
And just last fall, she had, like, some sort of change in her, like, in her system where, like, all of a sudden her medication wasn't working as well as it used to, which resulted in her spending a bunch of time in the hospital, like, trying to figure out what was going on.
And the healthcare system in the part of the world where my mom lives is, like, super overburdened.
Like, it's, you know, not a place where, like, tons of people are sort of moving for economic opportunity.
Like, it's one of those places where people tend to move away from to seek economic opportunity, which has impacts on everything that is happening in that part of the world.
And I sort of, as I was trying to help my mom navigate this situation, sort of, I was looking at how could AI have helped relieve the suffering that she was experiencing.
And, you know, if doctors everywhere had access to these tools, like, a lot of what she went through could have been immediately alleviated.
Like, if you just take her symptoms that she was presenting and put it into GPT-4 along with her chart, it would immediately say, like, go get this test, which is, like, going to help find root cause.
And then the results of the tests, like, could be input into the same session, which would then sort of give a set of recommendations to doctors about a course of therapy.
And, like, if that had happened in her case, like, what was six visits to the hospital, like, could have been compressed to one.
And, you know, I think about this a lot because I worry even whether or not my mom would have pulled out of a health spiral that she was in
if she hadn't had me for a son intervening on her behalf.
And I worry about all of the people in the world who don't have someone to intervene on their behalf, like, who are interacting with one of these resources that is overburdened.
And so, like, I just want us all to think about as we're imagining what the set of possibilities are for, you know, what we go do with AI.
Like, things like what Sol is doing with Khan Academy and, like, some of the amazing potential that we have to reduce suffering in the world and to, like, help make, you know, things like high quality healthcare more equitable and accessible.
Like, just super inspiring to think about.
So, you know, the last thing that I want to chat about before we get to the conversation with Sam Altman is how we at Microsoft have been thinking about building applications on top of this incredible platform as it progresses right now.
And so, the challenge, I think, for us, and I think it's the same challenge that all of us face, is that we really want to focus on things that sit at these phase transitions.
The transition from impossible to merely difficult.
Like, that's where all the interesting stuff is. Look at the history of platform revolutions.
Like, that's where all the interesting companies emerge from. It's where all the innovation happens. It's where all the value gets unlocked.
And in the case of technology platforms that are sort of exponentially progressing, it's like the only reasonable place to go aim.
Because if you're aiming somewhere different, like, the platform is becoming so much more capable and so much cheaper over time that everything that you sort of have in your imagination that's too expensive to do right now or too fragile is going to become cheap and robust before you can even blink your eye.
And so, like, that is really, you know, the thing more than anything else that I would say to all of you to take away from what I'm saying here today is, like, really focus on those phase transitions.
So, while you all have been out there, you know, grinding away, building really extraordinary things over the past year with all of these AI tools that are coming, like, we've been hard at work trying to make forward progress on our AI platform.
So, yeah, we talked a lot about how we're optimizing the current frontier, like, making things cheaper and making them more powerful and complete.
But we've also been hard at work building new supercomputing infrastructure and working with our partners at OpenAI to push that frontier forward.
And like we showed in the slide at the beginning, there's this really beautiful relationship right now between the exponentially progressing compute that we're applying to building the platform and the capability and power of the platform that we get.
And I just wanted to sort of, without mentioning numbers, which is sort of hard to do, to give you all an idea of scaling of these systems.
So, in 2020, we built our first AI supercomputer for OpenAI.
It's the supercomputer that trained GPT-3.
And so, like, we're going to just choose marine wildlife as our scale marker.
So you can think of that system as about as big as a shark.
The next system that we built is about as big as an orca.
And that is the system that we delivered in 2022 that trained GPT-4.
The system that we have just deployed is, like, scale-wise, about as big as a whale relative to, you know, this shark size supercomputer and this orca size supercomputer.
And it turns out that you can build a whole hell of a lot of AI with a whale size supercomputer.
And so, you know, one of the things that I just want everybody to really, really be thinking clearly about, and, like, this is going to be our segue to talking with Sam, is the next model is coming.
So, like, this whale size supercomputer is hard at work right now building the next set of capabilities that we're going to put into your hands so that you all can do the next round of amazing things with it.
And so, with that, I'd like to bring Sam Altman to the stage.
Come on up.
Good to see you.
You too.
So, you are one of the craziest people on the planet.
It's a wild week.
Yeah, it's a wild week.
It's a wild year, man.
But so, I really appreciate you taking time out to chat with us today.
So, I guess what I really wanted to start our conversation about, and, like, I asked you this question last week, is, you know, there's just been an extraordinary amount of change over the past year and a half.
Like, what has been the thing that has surprised you most, like, particularly relevant to an audience of developers?
Yeah.
I mean, I'm delighted to be here.
Obviously, great to see you, but developers have been such a core part of what's been happening this last year and a half.
There's millions of people building on the platform.
What people are doing is totally amazing.
And the speed of adoption, and the talent figuring out what to build with all of this, over what has really not been very long: like, when we put GPT-3 out in the API, some people thought it was cool, but it was narrow in where the utility happened.
And seeing what people have done with GPT-4 and seeing now what's happening with GPT-4-0, even though it's new and hasn't been out that long, is quite remarkable.
I've never seen a technology get adopted so quickly in such a meaningful way.
What people are building, how people are finding out how to do things that we never even thought of possible, which is why it's always great to have an API.
That's been very cool to see.
And I think, you know, what you just said is one of the most important points to me.
Like, there's a version of AI that could have existed that is, you know, like a bunch of smart people, like building, you know, things at extraordinary scale,
and then just building it into a bunch of products where everybody gets to passively use them.
Like, the really brilliant thing that you all have done is, like, taken the exact same set of things
and, like, decided to make it available to, like, any developer who's able to sign up for an API key.
Yeah, we try to be really thoughtful about what makes a good API for this.
There's going to be all kinds of ways people can use this.
But the more this can just be a layer that gets built into every product, every service, the better.
And we've tried to make it such that if you want to add intelligence to whatever you are doing, any product, any service, we make that very easy.
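That "intelligence as a layer you can bolt onto any product" point can be made concrete with a minimal sketch: a single helper that prepares a request for the public OpenAI chat-completions endpoint. The endpoint and payload shape follow the documented REST API; the model name and prompt are illustrative placeholders, and the actual HTTP send (and API key) are left out so the sketch stays self-contained.

```python
import json

# Documented public endpoint for chat completions.
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-4o") -> dict:
    """Build the JSON body for a single-turn chat-completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# A product feature then reduces to: build the body, POST it with your API
# key in the Authorization header, and read choices[0].message.content back.
body = build_chat_request("Summarize this support ticket in one line.")
payload = json.dumps(body)
```

The point of the sketch is how thin the layer is: any existing service can add a call like this without restructuring anything else.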
Yeah. And, like, again, I think the progress has been stunning.
So, you know, I think, you know, the setup for, like, introducing you onto the stage here was...
I saw that big blue whale.
Like, you know, you're making good use of the whale size computer right now.
And so, like, without, like, getting too specific, which we can't be, obviously.
Like, what are the category of things that people should be expecting over the next, you know, k-months.
K-months. What bastards. So cryptic.
The most important thing, and this sounds like the most boring, obvious, trite thing I can say, but I think it's actually much deeper than it sounds.
The most important thing is that the models are just going to get smarter, generally across the board.
There will be a lot of other things, too, which we can talk about.
But if you think about what happened from GPT-3 to 3.5 to 4, it just got smarter.
And you could use it for all these things.
It got a little more robust.
It got much safer, both because the model got smarter and we put much more work into building the safety tools around it.
It got more useful.
But the underlying capability, this amazing emergent property of, like, we actually are seeming to increase the general capability of the model across the board,
that's going to keep happening.
And the jump that we have seen in the utility that a model can deliver with each of those half-step jumps and smartness, it's quite significant each time.
So as we think about the next model and the next one and the incredible things that developers are going to build with that, I think that's the most important thing to keep in mind.
Also, speed and cost really matter to us.
With GPT-4o, we were able to bring the price down by half and double the speed.
New modalities really matter.
Voice mode has been actually a genuine surprise for me and how much I like the new voice mode.
And when people start integrating that, I think that'll matter.
But it's the overall intelligence that'll be coming that I think matters the most.
So you, for a while now, have been one of the most successful partners in the world.
Nothing new here, he has said it in other interviews, but it matters that he keeps saying it and that we keep hearing it.
They are going to get smarter.
The next version.
That is important.
So you know, there are 5,000 people in the room and there are about 200,000 people online right now.
What's your advice to them as they think about how to spend their precious time, given what's happening in the world?
Like, what's your advice?
Two things.
Number one, this is probably the most exciting time to be doing, building a product, doing a startup, whatever it is,
that we have seen at least since the mobile boom and probably, I would say, since the internet.
And maybe even bigger than that.
We don't know yet.
Yeah.
But the big opportunities, the big, you know, the ability to sort of build something new
and really kind of like change the landscape, that comes at the platform shift times.
And we haven't had a platform shift in a while.
And this looks like it's really, truly a platform shift.
And so my biggest piece of advice is like, this is a special time and take advantage of it.
This is like not the time to delay what you were planning to do or the next thing.
Like, this is a special moment and a few years where a lot of stuff is going to happen
and a lot of like great new things are going to get going.
The second thing also about platform shifts is when the mobile phone revolution started,
or really got going, like 2008, 2009, you would see people say, we're a mobile company.
You know, we have a mobile app.
And then only a few years later, no one said they were a mobile company because it was like table stakes.
And amazing new technology, which I am biased about but will put AI in that category,
it doesn't get you out of the hard work of building a great product or a great company or a great service.
You still have to do it.
AI alone is a new enabler, but it does not automatically break the rules of business.
And so you can use this as like a new thing to do, but you still have to figure out how you're going to build enduring value in whatever you're doing.
And it's easy to lose sight of that and the excitement of the gold rush.
Yeah.
So one last thing before we let you go.
So, you know, you and I and like members of your team and members of the Microsoft team have been doing really an extraordinary volume of work over the past year and a half, two years,
thinking about safe deployment of an awful lot of AI capability, like everything from, you know, APIs and developer tools to end products.
And, you know, I think we, you know, have accumulated a really interesting volume of experience, like experience that's sort of hard to get if you're not doing deployments at this scale.
So I, you know, and I think you just mentioned something that's like really, really interesting, like part of, you know, part of the interesting and surprising progression of capabilities of these models means that they're more useful in like helping to like make AI systems safer.
So I don't know whether you had some thoughts you wanted to share there as well.
You know, when we first developed this technology, we spent a lot of time talking about, all right, we've made this thing.
It's cool.
Are we ever going to be able to get it to an acceptable level of robustness and safety?
And now we kind of take that for granted with GPT-4.
You know, if you use it, it's far from perfect.
We have more work to do.
But it is generally considered robust enough and safe enough for a wide variety of uses.
And that took an enormous amount of work across both teams.
And fundamental research.
Like when we started this, we're like got this thing.
We've got this thing.
It looks like kind of impressive and kind of not.
And even then, how are we going to like get it aligned?
And what does it mean?
You know, what is it going to take to be able to deploy it?
The number of different teams we've had to build up to go from research and creation of the model to safety systems to figuring out policy to how we do the monitoring.
That's a huge amount of work, but it's necessary to be able to deploy these and use them.
Like, you know, when you take a medicine, you want to know it's going to be safe.
When you use an AI model, you want to know it's going to be robust and behave the way you want.
And I have been super proud of the work that the teams have done together.
And I think it's amazing how fast this much work has happened and that we can all now use this and say, oh yeah, it basically works.
As the models get more powerful, there will be many new things we have to figure out as we move towards AGI.
The level of complexity and I think the new research that it'll take will increase.
I'm sure we'll do that together.
But we view this as a gate on being able to put these things out into the world, which we really want to do.
Yeah, it's definitely table stakes.
So thank you so much for being with us here today.
Like, I really appreciate your time.
It's awesome to hear from you.
Awesome.
And off he goes.
And that's it, folks.
That was Sam Altman's appearance.
They weren't going to announce anything today.
So my very last thing for you all is the following call to action.
So Microsoft and our partners like OpenAI are spending.
Perhaps the only real data point we have right now, and it is very important, is the whale thing.
Okay?
GPT-3 was a shark.
GPT-4 is an orca, in terms of the infrastructure footprint they have been handed.
GPT-5 is a whale.
That is the most they could get out of OpenAI to share today at Microsoft Build.
Well.
You all are the ones making all of these things matter.
Like, we could build platforms all day, all night, and if you all didn't have the great ideas,
like you didn't understand the consequential impact that you wanted to have on the world,
it would all be for nothing.
And so I'm incredibly grateful for all of the things that you all have done on the platform over the past year,
And I am incredibly excited to see what all of you are going to do in the year ahead.
Thank you very much.
Quick takeaways from what we have seen.
Let me make myself a bit bigger on screen.
Hello, how's it going?
Takeaways.
If Google's event was coming soon, coming soon, coming soon, later this year, later this year, this event has been Copilot, Copilot, Copilot, Copilot, Copilot.
It's funny, I was just thinking about it.
Maybe Google's event sinned by showing too many things, very different products or products with very different rebrandings,
so that you get lost in a sea of applications that are all so distinct yet all look alike, and none of them really sticks.
Microsoft, on the other hand, and maybe there is no right spot, or virtue lies in the middle,
has almost the opposite problem.
Everything is named the same, everything connects to the same idea, which is Copilot, and in the end it gets saturated.
But well, what we've seen is a Microsoft... I think what has happened at this year's Microsoft Build is
not that surprising, because much of what they are now saying, hey, this is already here,
is what they announced a year ago, when GPT-4 was already known: this is coming,
that is, this is going to evolve this way. Now we are at the point where that step has been taken
and all of it is here, but they are also working on the infrastructure, right?
They focused heavily on the stack, walking us through everything from the Windows libraries
for running models, PyTorch, DirectML, a whole bunch of things, down to the processors
underneath, and up to the application layer.
They told us a lot about this AI stack to basically say that, well,
they are getting the platform ready: Windows, the PCs, everything, to run AI.
If anyone has the feeling that this AI thing might be hype and a passing wave,
we wouldn't see Microsoft investing in and redefining the company's stack this much
around what is coming. And what is coming? Well, what they just said.
We can trust that a large part of what we've seen today of Microsoft's Copilot,
which we know is based on GPT-4, is going to work up to a point, right?
We know many of these applications suffer from hallucinations, that they make reasoning mistakes;
all of that fails. If you could suddenly guarantee that, just as today with GPT-4
you know you won't see a spelling mistake, with GPT-5
most simple tasks will be done correctly, then all of this takes on a spectacular new meaning.
And that is what Microsoft is working on. What's interesting here, from my point of view,
is that last year Microsoft could present a ton of interesting things
because OpenAI had handed them GPT-4, okay? GPT-4 had already been presented, the technology was there.
This year, since OpenAI hasn't released anything new yet, well, they launched GPT-4o just today, little more,
yes, they could show a couple of demos, but not much else. They enabled it in the Azure API,
which is very interesting, but they couldn't tell us anything more about what might be coming,
because in the end OpenAI is the one leading the way. And that's curious. In the end,
the presentations we saw last week and this week are very different, right?
OpenAI's was concise, short, new technology, boom. The others are companies with tons of products,
with tons of services, bringing you integrations, which are not in themselves very novel.
I said it in the previous stream: they are very 2003, right? They are things we could already imagine
coming back in 2003; not novel, but interesting because now they are actually being implemented
in our tools. But the locomotive of this whole train, the hype train
we might call it, the locomotive today is still OpenAI.
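The Azure API availability mentioned above is, on the developer side, mostly a matter of addressing: Azure OpenAI routes requests to a deployment you create inside your own resource rather than to a global model endpoint. A minimal sketch, assuming the documented Azure OpenAI REST URL scheme; the resource name, deployment name, and api-version below are illustrative placeholders.

```python
def azure_chat_url(resource: str, deployment: str,
                   api_version: str = "2024-02-01") -> str:
    """Build the Azure OpenAI chat-completions URL for one deployment."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = azure_chat_url("my-resource", "my-gpt4o-deployment")
# Authentication also differs from the public API: the key goes in an
# "api-key" header rather than an Authorization: Bearer token.
```

The request body itself is the same chat-completions JSON as the public API, which is what makes it easy for Azure customers to pick up a new model like GPT-4o as soon as a deployment for it exists.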
No company is matching them, at least in expectations. That said,
a lot of this is about what they say is coming. We'll have to see whether it's true,
but if we trust that they follow the pattern of previous years with GPT-2, GPT-3, GPT-4,
for now they are in the lead. I think Google is doing a very good job
with the whole Gemini family. They are reaching places OpenAI hasn't entered yet,
like very long context windows and such. They also managed to ship
multimodality before OpenAI, but OpenAI right now keeps advancing and keeps setting
the pace. And Microsoft, hand in hand with them, really has a travel companion
they are very happy with, because if OpenAI keeps handing them all this technology
in advance, Microsoft has an enormous competitive edge, because they can start
implementing and developing across all the PCs, the operating system, the applications,
months ahead of everyone else, which is a massive competitive advantage over the competition.
That's roughly the analysis of what can be said about what we've seen today.
We did see, yes, that they talked about the Phi models, the small ones,
what they call, and I think this is something Microsoft puts a lot of emphasis on,
the SLMs, Small Language Models, the tiny models.
Also very interesting, right? We've seen implemented here the strategy
that Meta has also followed with its open-source models, releasing Llama 1, Llama 2, Llama 3:
let the open-source community work with the infrastructure, improve it for me,
optimize the models for me, do a large part of the work, and then I take advantage
of those models to integrate them into my own tech stack, right?
The Ray-Ban glasses, the social media apps, etc.
We've seen the same here with Phi-3.
Microsoft released Phi-1, Phi-2, Phi-3, and now they've announced Phi Silica,
which is going to be a small model that will presumably run locally on mobile devices,
on, I don't know, the Surface and all that.
Models that will be able to run locally and that will presumably power
much of the AI coming to Windows, which will presumably be a lot.
I don't think there were big announcements; nothing overly spectacular.
Well, adding yesterday's talk, it comes to almost four hours of presentation content.
From that you can distill a few cool things, the PCs, some of the new features they presented,
but nothing super, super spectacular.
In other words, the work of updating the applications to everything the models can offer today,
and little more, and little more.
But hey, I really liked that it has now been formalized on a stage.
I think it's the first time that on a stage, that is, at an official event,
not in a Lex Fridman interview or somewhere else, the next GPT model has been discussed.
And I'm not saying GPT-5, because Sam Altman himself has said on other occasions that he doesn't know whether it will be called GPT-5
or something else.
It's very interesting.
What they shared is very interesting.
Also, I don't think they cheated with this analogy.
I think when they say shark, orca, whale, they are referring to the GPT models.
But if the metric they really wanted to convey here was the amount of infrastructure
that Microsoft hands over to OpenAI, we could imagine that infrastructure is not 100% dedicated to the GPT models.
OpenAI has Sora, has other models, has a much bigger application layer,
I mean, a service offered to users that is much bigger than it was two years ago.
Even so, I don't think that's what they mean.
I think they wanted to draw the analogy with GPT-2, GPT-3, GPT-4.
And well, now we know GPT-5 is going to be a whale.
I hope it isn't as slow as a whale, but at least let it be as big and as powerful as one.
Also, this is interesting because they talked precisely about how much infrastructure they hand over to OpenAI.
And someone inside Microsoft told me that you can tell when resources are being handed to OpenAI
by the availability of resources in part of their own infrastructure.
So it's funny, because it seems that one way for them to measure, to take the temperature of what OpenAI is doing,
is how much infrastructure OpenAI needs out of what Microsoft gives them.
Guys, girls, I hope you enjoyed the stream.
For my taste, it ran very long.
I'd ask the big Silicon Valley companies to take note of OpenAI's event.
A little longer, fine, come on, 40 minutes, but don't take it to an hour and a half, two hours, three hours,
because otherwise it drags.
It drags, and I thank you all for holding on here like champs;
4,000 of us have stayed steady through the whole stream.
Admittedly, it monetizes better if it's longer.
So look, hey, Microsoft, thank you very much.
We'll split the money from this stream.
And to you, guys, girls, thank you very much.
Thank you so much for being here.
The May events are over.
Now we go back to publishing regular videos.
And the next event we will also cover, so we have the whole stack complete,
and one that could be curious, is Apple's in June, okay?
Apple's main developer event, where we'll possibly see, well, what the rumors say, right?
That it could be this union between OpenAI and iOS to bring GPT-4o as an assistant,
without Scarlett Johansson's voice, to Apple devices.
For now they're rumors, but if it's confirmed, or if they bring us artificial intelligence things,
the Apple folks, the Cupertino folks, we'll be there to cover it.
Everyone here live, as a family, on DotCSV.
A hug, and until next time.
Bye, bye.
Hit like, you dogs, as they say in the chat.
You've got a couple of videos here you can browse.
Actually no, because they're not up yet, but those of you watching this re-uploaded, you'll have them.
Well, come on, I'm off now.
Bye.