The following is a conversation with Vladimir Vapnik.
Part two, the second time we spoke on the podcast.
He's the co-inventor of support vector machines,
support vector clustering, VC theory,
and many foundational ideas in statistical learning.
He was born in the Soviet Union,
worked at the Institute of Control Sciences in Moscow,
then in the US, worked at AT&T, NEC Labs,
Facebook AI Research,
and now is a professor at Columbia University.
His work has been cited over 200,000 times.
The first time we spoke on the podcast
was just over a year ago, one of the early episodes.
This time, we spoke after a lecture he gave
titled Complete Statistical Theory of Learning
as part of the MIT series of lectures
on deep learning and AI that I organized.
I'll release the video of the lecture
in the next few days.
This podcast and lecture are independent from each other,
so you don't need one to understand the other.
The lecture is quite technical and math heavy,
so if you do watch both,
I recommend listening to this podcast first,
since the podcast is probably a bit more accessible.
This is the Artificial Intelligence podcast.
If you enjoy it, subscribe on YouTube,
give it five stars on Apple Podcasts,
support it on Patreon,
or simply connect with me on Twitter.
And Lex Fridman, spelled F-R-I-D-M-A-N.
As usual, I'll do one or two minutes of ads now
and never any ads in the middle
that can break the flow of the conversation.
I hope that works for you
and doesn't hurt the listening experience.
This show is presented by Cash App,
the number one finance app in the App Store.
When you get it, use code LEX Podcast.
Cash App lets you send money to friends, buy Bitcoin,
and invest in the stock market with as little as $1.
Brokerage services are provided by Cash App Investing,
a subsidiary of Square and member SIPC.
Since Cash App allows you to send
and receive money digitally, peer-to-peer,
and security in all digital transactions is very important,
let me mention the PCI Data Security Standard,
PCI DSS level one,
that Cash App is compliant with.
I'm a big fan of standards for safety and security,
and PCI DSS is a good example of that,
where a bunch of competitors got together
and agreed that there needs to be a global standard
around the security of transactions.
Now, we just need to do the same
for autonomous vehicles and AI systems in general.
So again, if you get Cash App from the App Store,
or Google Play, and use the code LEX Podcast,
you get $10, and Cash App will also donate $10 to FIRST,
one of my favorite organizations
that is helping to advance robotics and STEM education
for young people around the world.
And now, here's my conversation with Vladimir Vapnik.
You and I talked about Alan Turing yesterday,
a little bit, and that he,
as the father of artificial intelligence,
may have instilled in our field an ethic of engineering,
and not science, seeking more to build intelligence
rather than to understand it.
What do you think is the difference
between these two paths of engineering intelligence
and the science of intelligence?
It's a completely different story.
Engineering is an imitation of human activity.
You have to make a device which behaves like a human,
has all the functions of a human.
It does not matter how you do it,
but to understand what is intelligence about
is quite a different problem.
So I think, I believe that it's somehow related
to the predicates we talked about yesterday,
because look at Vladimir Propp's idea.
He just found 31 predicates.
He called them units,
which can explain human behavior,
at least in Russian tales.
He looked at Russian tales and derived them from that,
and then people realized that it applies more widely
than Russian tales.
It is in TV, in movie serials, and so on and so on.
So, you're talking about Vladimir Propp,
who in 1928 published a book, Morphology of the Folktale,
describing 31 predicates that have this kind of sequential
structure that a lot of stories and narratives follow
in Russian folklore and in other contexts.
We'll talk about it.
I'd like to talk about predicates in a focused way,
but if you allow me to stay zoomed out
on our friend Alan Turing.
He inspired a generation with the imitation game.
Yes.
If we can linger a little bit longer,
do you think learning to imitate intelligence
can get us closer to understanding intelligence?
Why do you think imitation is so far from understanding?
I think the difference is that you have different goals.
So, your goal is to create something, something useful.
And that is great.
And you can see how many things have been done,
and I believe that even more will be done,
self-driving cars and all this business.
It is great.
And it was inspired by Turing's vision.
But understanding is very difficult.
It's more or less a philosophical category.
What does it mean, understanding the world?
I believe in a scheme which starts from Plato,
that there exists a world of ideas.
I believe that intelligence is a world of ideas.
But it is a world of pure ideas.
And when you combine these with reality, with things,
it creates, as in my case, invariants,
which are very specific.
And I believe that the combination of ideas
in a way to construct an invariant is intelligence.
But first of all, predicates.
If you know the predicates, and hopefully
not too many predicates exist.
For example, 31 predicates for human behavior,
it is not a lot.
Vladimir Propp used 31, you can even call them predicates,
31 predicates to describe stories, narratives.
So, you think human behavior, how much of human behavior,
how much of our world, our universe,
all the things that matter in our existence
can be summarized in predicates of the kind
that Propp was working with.
I think that we have a lot of forms of behavior.
But I think the predicates are much fewer.
Because even in these examples, which I gave you yesterday,
you saw that one predicate
can construct many different invariants,
depending on your data.
Applying it to different data
gives different invariants.
But pure ideas, maybe not so much.
Not so many.
I don't know about that.
But that is my guess, my hope.
That's why the challenge about digit recognition:
how many do you need?
I think we'll talk about computer vision
and 2D images a little bit in your challenge.
That's exactly about intelligence.
That's exactly about, or at least
it hopes to be exactly about, the spirit of intelligence
in the simplest possible way.
Absolutely. You should start with the simplest way.
Otherwise you will not be able to do it.
Well, there's an open question,
whether starting at the MNIST digit recognition
is a step towards intelligence,
or it's an entirely different thing.
I think that to beat records using 100, 200 times less examples,
you need intelligence.
Because you use this term,
and it would be nice,
I'd like to ask simple, maybe even dumb questions.
Let's start with a predicate.
In terms of terminology and how you think about it,
what is a predicate?
I don't know.
I have a feeling,
formally, they exist.
But I believe that predicate for 2D images,
one of them is symmetry.
Hold on a second. Sorry.
Sorry to interrupt and pull you back.
At the simplest level,
we're not being profound currently.
A predicate is a statement of something that is true.
Yes.
Do you think of predicates as somehow probabilistic in nature,
or is this binary,
this is truly constraints of logical statements about the world?
In my definition, the simplest predicate is a function.
And you can use this function to make an inner product, and that is a predicate.
What's the input, and what's the output of the function?
Input is x, something which is input in reality.
Say, if you consider digit recognition, it's pixel space.
Yes.
Input.
But it is a function in pixel space.
It can be any function of pixel space.
And you choose, and I believe that there are several functions
which are important for an understanding of images.
One of them is symmetry.
It's not so simple a construction,
as I described with the derivatives, with all this stuff.
But another one, I believe, I don't know how many,
is how well structurized the picture is.
Structurized?
Yeah.
What do you mean by structurized?
It is formal definition.
Say, something heavy on the left corner,
not so heavy in the middle and so on.
You describe in general concept of what you assume.
Concepts, some kind of universal concepts.
Yeah.
But I don't know how to formalize this.
Do you?
So this is the thing.
There's a million ways we can talk about this.
I'll keep bringing it up.
But we humans have such concepts when we look at digits.
But it's hard to put them, just like you're saying now,
it's hard to put them into words.
You know, there is an example.
When critics in music are trying to describe music,
they use predicates, and not too many predicates,
but in different combinations.
They have some special words for describing music.
And the same should be true for images.
But maybe there are critics who understand
the essence of what an image is about.
Do you think there exists critics who can summarize
the essence of images, human beings?
I hope so, yes.
But can they explicitly state them on paper?
The fundamental question I'm asking is,
do you think there exists a small set of predicates
that will summarize images?
It feels to our mind, like it does,
that the concept of what makes a two and a three and a four.
No, no, no, it's not on this level.
It should not describe two, three, four.
It describes some construction which allows you to create invariants.
And invariant, sorry to stick on this, but terminology.
An invariant, it is a property of your image.
Say, I can say, looking at my image,
it is more or less symmetric,
and I can give you a value of symmetry, say,
level of symmetry using this function which I gave yesterday.
And you can describe that your image has these characteristics.
Exactly in the way how musical critics describe music.
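As an illustration of what such a predicate could look like as a function on pixel space, here is a minimal sketch in Python: it scores an image by how close it is to its own mirror reflection. The normalized correlation used here is an assumption chosen for illustration, not the exact construction from the lecture.

```python
import numpy as np

def degree_of_symmetry(image: np.ndarray, axis: int = 1) -> float:
    """Illustrative 'degree of symmetry' predicate for a 2D grayscale image.

    Compares the image with its mirror reflection along the given axis,
    using a normalized inner product of mean-centered pixel values.
    Returns a value in [-1, 1]: 1 for a perfectly mirror-symmetric image,
    negative values when the two halves carry opposite intensity patterns,
    values near 0 when there is no particular symmetry.
    """
    mirrored = np.flip(image, axis=axis)
    x = image.astype(float).ravel()
    y = mirrored.astype(float).ravel()
    x = x - x.mean()
    y = y - y.mean()
    norm = np.linalg.norm(x) * np.linalg.norm(y)
    if norm == 0.0:
        return 0.0  # constant image: no symmetry information
    return float(np.dot(x, y) / norm)

# Example: a 28x28 digit-like array (pixel intensities in [0, 1], as in MNIST).
digit = np.zeros((28, 28))
digit[4:24, 13:15] = 1.0           # a vertical stroke, mirror-symmetric left to right
print(degree_of_symmetry(digit))   # prints 1.0 for this perfectly symmetric stroke
```

A score like this is one scalar property of the image, in the spirit of the level of symmetry described above.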
So, but this is invariant applied to specific data,
to specific music, to something.
I strongly believe in these Plato ideas,
that there exists a world of predicates and a world of reality,
and predicates and reality are somehow connected,
and you have to deal with that.
Let's talk about Plato a little bit.
So, you draw a line from Plato to Hegel to Wigner to today.
Yes.
So, Plato has forms, the theory of forms.
There's a world of ideas, and a world of things,
as you talk about, and there's a connection.
And presumably, the world of ideas is very small,
and the world of things is arbitrarily big.
But they're all, what Plato calls them, like, it's a shadow.
The real world is a shadow from the world of forms.
Yeah, you have projection.
Projection.
Of a world of ideas.
Yeah, very poetic.
In reality, you can realize this projection using invariants,
because it is a projection onto specific examples,
which creates specific features of specific objects.
So, the essence of intelligence is,
while only being able to observe the world of things,
try to come up with a world of ideas.
Exactly.
Like in this music story, intelligent musical critics
know all this world and have a feeling about what it is.
I feel like that's a contradiction, intelligent music critics.
But I think music is to be enjoyed in all its forms.
The notion of critic, like a food critic.
No, I don't want to touch emotion.
That's an interesting question.
There's emotion, there's certain elements of the human psychology,
of the human experience, which seem to almost contradict intelligence and reason.
Like emotion, like fear, like love.
All of those things, are those not connected in any way to the space of ideas?
That's, I don't know.
I just want to be concentrated on a very simple story, on digit recognition.
So, you don't think you have to love and fear death in order to recognize digits?
I don't know, because it's so complicated.
It involves a lot of stuff, which I never consider.
But I know about digit recognition.
And I know that for digit recognition, to get the records from a small number of observations,
you need predicates.
But not special predicates for this problem,
but universal predicates, which understand the world of images.
Of visual information.
But on the first step, they understand the world of handwritten digits,
or characters, or something simple.
So, like you said, symmetry is an interesting one.
No, that's what I think, one of the predicates related to symmetry.
The level of symmetry.
Okay, degree of symmetry.
So, you think symmetry at the bottom is a universal notion,
and there's degrees of a single kind of symmetry,
or is there many kinds of symmetries?
Many kinds of symmetries.
There is a symmetry, anti-symmetry, say letter S.
So, it has vertical anti-symmetry.
And it could be diagonal symmetry, vertical symmetry.
So, when you cut vertically the letter S.
Yeah, then the upper part and lower part in different directions.
Yeah, inverted, along the y-axis.
But that's just like one example of symmetry, right?
Right, but there is a degree of symmetry.
If you apply all this derivative stuff to do tangent distance,
whatever I described, you can have a degree of symmetry.
And that describes a property of the image.
It is the same as when you describe this image,
saying about digits: this one has anti-symmetry,
these digits are more or less symmetric.
Do you think such concepts like symmetry, predicates like symmetry,
is it a hierarchical set of concepts,
or are these independent distinct predicates
that we want to discover some set of?
There is a degree of symmetry.
And you can make this idea of symmetry very general,
like a degree of symmetry.
The degree of symmetry can be zero, no symmetry at all,
or the degree of symmetry can be, say, more or less symmetrical.
So you have one of these descriptions,
and symmetry can be different.
As I told you: horizontal, vertical, diagonal,
and anti-symmetry is also a concept of symmetry.
What about shape in general?
Symmetry is a fascinating notion, but...
No, no, I'm talking about digits.
I would like to concentrate on,
I would like to know, the predicates for digit recognition.
Yes, but symmetry is not enough for digit recognition, right?
It is not necessary for digit recognition.
It helps to create an invariant which you can use
when you have examples for digit recognition.
You have the regular problem of digit recognition.
You have examples of the first class and the second class.
Plus, you know that there exists the concept of symmetry.
And when you're looking for the decision rule,
you will apply the concept of symmetry,
of this level of symmetry which you estimate from the data.
So let's talk.
Everything comes from weak convergence.
What is convergence?
What is weak convergence?
What is strong convergence?
I'm sorry, I'm going to do this to you.
What are we converging from and to?
You're converging.
You would like to have a function.
The function which, say, indicator function,
which indicates your digit 5, for example.
A classification task.
Let's talk only about classification.
So classification means you will say,
whether this is a 5 or not, or say which of the 10 digits it is.
Right.
I would like to have this function.
Then I have some examples.
I can consider a property of these examples,
say symmetry.
And I can measure the level of symmetry for every digit.
And then I can take the average over my training data.
And I will consider only functions of conditional probability,
which I'm looking for as my decision rule, which, applied to the digits,
will give me the same average as I observe on the training data.
So actually, this is a different level of description of what you want.
You show not one digit.
You show, through this predicate, a general property of all the digits
which you have in mind.
If you have in mind the digit 3, it gives you a property of the digit 3.
And you select, as your admissible set of functions,
only the functions which keep this property.
You will not consider other functions.
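In symbols, the constraint being described is roughly the following (a sketch of the idea, with notation introduced here: $\psi$ is the predicate function, $(x_i, y_i)$, $i = 1, \dots, \ell$, are the training examples, and $f$ is the candidate conditional-probability function):

$$
\frac{1}{\ell}\sum_{i=1}^{\ell}\psi(x_i)\,f(x_i) \;\approx\; \frac{1}{\ell}\sum_{i=1}^{\ell}\psi(x_i)\,y_i ,
$$

with one such equality per predicate; the admissible set is the set of functions $f$ that satisfy them.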
So you're immediately looking for a smaller subset of functions.
That's what you mean by admissible functions.
Which is still pretty large for the number 3.
It is pretty large, but that's if you have just one predicate.
But according to the theory, there is strong and weak convergence.
Strong convergence is convergence in functions.
You're looking for one function,
and you have another function,
and the square difference between them should be small.
If you take the difference at any point, square it,
take the integral, and it should be small.
That is convergence in functions.
Suppose you have some function, any function.
I say that some function converges to this function
if the integral of the square difference between them is small.
That's the definition of strong convergence.
That is the definition of strong convergence.
Two functions, the integral of the squared difference is small.
It is convergence in functions.
But you have a different convergence, in functionals.
You take any function, you take some function phi,
and take the inner product with this function f,
with the f0 function which you want to find.
And that gives you some value.
So you say that a set of functions converges, in inner product, to this function
if this value of the inner product converges to the value for f0.
That is for one phi.
But weak convergence requires that it converges for any function of the Hilbert space.
If it converges for any function of the Hilbert space,
then you would say that this is weak convergence.
You can think that when you take the integral, that is a property,
an integral property of the function.
For example, if you take sine or cosine,
it is a coefficient of, say, the Fourier expansion.
So if it converges for all coefficients of the Fourier expansion,
then under some conditions, it converges to the function you're looking for.
But weak convergence means any property:
convergence not pointwise, but of integral properties of the function.
So weak convergence means convergence of integral properties of functions.
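For reference, the two modes of convergence being contrasted can be written in the standard way, with $f_\ell$ the sequence of approximating functions and $f_0$ the desired function. Strong convergence is convergence in norm,

$$
\int \bigl(f_\ell(x) - f_0(x)\bigr)^2\,dx \;\longrightarrow\; 0,
$$

while weak convergence asks only that every integral property, every inner product, converges:

$$
\int \phi(x)\,\bigl(f_\ell(x) - f_0(x)\bigr)\,dx \;\longrightarrow\; 0 \quad \text{for every } \phi \text{ in the Hilbert space.}
$$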
When I talk about predicates, I would like to formulate
which integral properties I would like to have for convergence.
And if I take one predicate (a predicate is a function with which I measure a property),
if I use one predicate and say I will consider only the functions
which give me the same value as this predicate,
I am selecting a set of functions which is admissible,
in the sense that the function which I am looking for is in this set of functions.
Because I check on the training data that it gives the same value.
Yes, it always has to be connected to the training data in terms of...
Yeah, but the property you can know independently of the training data.
Like this guy Propp.
There are formal properties, 31 properties, and...
A fairy tale, Russian fairy tale.
But the Russian fairy tales are not so interesting.
More interesting is that people apply this to movies, to theater, to different things.
The same works; they're universal.
Well, so I would argue that there's a little bit of a difference between
the kind of things that were applied to which are essentially stories and digit recognition.
It is the same story.
You're saying digits, there's a story within the digit.
Yeah.
So, but my point is, why I hope that it is possible to beat the record using not 60,000,
but, say, 100 times fewer examples: because instead you will give predicates.
And you will select your decision not from a wide set of functions,
but from the set of functions which keeps this predicate.
But the predicate is not related just to digit recognition.
Right, so...
Like in Plato's case.
Do you think it's possible to automatically discover the predicates?
So you basically said that the essence of intelligence is the discovery of good predicates.
Yeah.
Now, the natural question is, you know, that's what Einstein was good at doing in physics.
Can we make machines do these kinds of discovery of good predicates?
Or is this ultimately a human endeavor?
That I don't know.
I don't think that a machine can do it.
Because according to the theory about weak convergence, any function from Hilbert space can be a predicate.
So you have an infinite number of predicates, and beforehand you don't know which predicate is good and which is not.
But what Propp showed, and why people call it a breakthrough, is that there are not too many predicates
which cover most of the situations that happen in the world.
So there's a sea of predicates, and only a small amount are useful for the kinds of things that happen in the world.
I think, I would say, only a small part of the predicates are very useful.
All of them are useful.
But only very few are what we should, let's call them, good predicates.
Very good predicates.
Very good predicates.
So can we linger on it?
What's your intuition?
Why is it hard for a machine to discover good predicates?
Even in my talk I described how to find a predicate,
how to find a new predicate.
But I'm not sure that it is very good.
What did you propose in your talk?
No.
In my talk, I gave example for diabetes.
Diabetes, yeah.
When we achieve some percentage, then we look for an area where some predicate which I formulate
does not keep the invariant.
If it does not keep it, I return to my data.
I select only the functions which keep the invariant.
And when I did it, I improved my performance.
I can look for this predicate.
I know technically how to do that.
And you can, of course, do it using a machine.
But I'm not sure that we will construct the smartest predicates.
Well, this is the...
Allow me to linger on it.
Because that's the essence.
That's the challenge.
That is the artificial, the human-level intelligence that we seek:
the discovery of these good predicates.
You've talked about deep learning as a way to... the predicates they use and the functions are mediocre.
We can find better ones.
Let's talk about deep learning.
Sure.
Let's do it.
I know only Yann LeCun's convolutional networks.
And what else?
I don't know.
And it's a very simple convolution.
There's not much else to know.
There's no pixel left and right.
Yes.
I can do it like that.
There's one predicate.
It is...
Convolution is a single predicate.
It's single.
It's single predicate.
Yes.
You know exactly:
you take the derivative with respect to translation, and that predicate,
it should be kept.
So that's a single predicate, but humans discovered that one?
Or at least...
Not that.
That is a risk.
Not too many predicates.
That is a big story, because Yann did it 25 years ago and nothing so clear has been added to deep networks since.
And then I don't understand why we should talk about deep networks instead of talking about
piecewise linear functions which keep this predicate.
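As a small illustration of the translation property being referred to (an illustrative numpy check, not the construction from the talk): a convolutional response to a translated image is the translated response, so any statistic pooled over positions is unchanged by translation.

```python
import numpy as np

def circular_correlate(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Circular 2D cross-correlation:
    out[i, j] = sum_{k, l} kernel[k, l] * image[(i + k) % H, (j + l) % W]."""
    out = np.zeros(image.shape, dtype=float)
    for k in range(kernel.shape[0]):
        for l in range(kernel.shape[1]):
            out += kernel[k, l] * np.roll(np.roll(image, -k, axis=0), -l, axis=1)
    return out

rng = np.random.default_rng(0)
image = rng.random((16, 16))
kernel = rng.random((3, 3))
shifted = np.roll(image, 2, axis=1)   # translate the image two pixels (circularly, for simplicity)

r = circular_correlate(image, kernel)
r_shifted = circular_correlate(shifted, kernel)

# Equivariance: the response to the shifted image is the shifted response ...
assert np.allclose(np.roll(r, 2, axis=1), r_shifted)
# ... so a statistic pooled over positions (here, the sum) is invariant to translation.
assert np.isclose(r.sum(), r_shifted.sum())
```

This is the invariant that convolutional architectures build in by construction; the open question in the conversation is which further predicates of this kind exist.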
The counter argument is that maybe the amount of predicates necessary to solve general intelligence,
say in space of images, doing efficient recognition of handwritten digits is very small.
And so we shouldn't be so obsessed about finding...
We'll find other good predicates like convolution, for example.
There have been other advancements, like if you look at the work with attention, there are
attentional mechanisms, especially used in natural language, focusing the network's ability
to learn which part of the input to look at.
The thing is, there's other things besides predicates that are important for the actual
engineering mechanism of showing how much you can really do given such these predicates.
That's essentially the work of deep learning is constructing architectures that are able
to be given the training data to be able to converge towards a function that can approximate,
can generalize well.
It's an engineering problem.
Yeah, I understand.
But let's talk not on emotional level, but on a mathematical level.
You have set of piecewise linear functions.
It is all possible neural networks.
It's just piecewise linear functions.
There's many, many pieces.
Large number of piecewise linear functions.
Exactly.
Very large.
But it's still simpler than, say, a reproducing kernel Hilbert space, which
has a whole Hilbert space of functions.
What's Hilbert space?
It's a space with an infinite number of coordinates, a function expansion, something like that.
So it's much richer.
And when I'm talking about closed-form solution, I'm talking about this set of functions, not
piecewise linear set, which is a particular case.
It is a small part.
So neural networks is a small part of the space of functions you're talking about?
Small set of functions.
But it is fine.
I don't want to discuss the small or big and take advantage.
So you have some set of functions.
Now, when you're trying to create an architecture, you would like to create an admissible set of
functions; all your tricks are to use not all functions, but some subset of this set of functions.
Say, when you're introducing a convolutional net, it is a way to make this subset useful for you.
But from my point of view, convolution is something where you want to keep some invariance,
say, translation invariance.
And now, if you understand this, and you cannot explain on the level of ideas what a neural network does,
you should agree that it is much better to have a set of functions
and to say: this set of functions should be admissible.
It must keep this invariant, this invariant, and that invariant.
You know that as soon as you incorporate a new invariant, the set of functions becomes
smaller and smaller and smaller.
But all the invariants are specified by you, the human?
Yeah.
But what I hope is that there are standard predicates, like Propp showed.
That's what I want to find for digit recognition.
If we start, it is a completely new area: what intelligence is about, on the level starting from Plato's idea.
What is the world of ideas?
And I believe that there are not too many.
Yeah.
But, you know, it is amusing that mathematicians are doing something with neural networks, with general functions,
but people from literature, from art, they use this all the time.
That's right.
Invariants, say: it is great how people describe music.
We should learn from that.
And something on this level, and that's why Vladimir Propp, who was just theoretical, who studied
theoretical literature, he found that.
You know what?
Let me throw that right back at you, because there's a little bit of a, that's less mathematical
and more emotional, philosophical, Vladimir Propp.
I mean, he wasn't doing math.
No.
And you just said another emotional statement, which is you believe that this Plato world of ideas is small.
I hope.
I hope.
Do you, what's your intuition though, if we can linger on it?
You know, it is not just small or big.
I know exactly.
When I introduce some predicate, I decrease the set of functions.
But my goal is to decrease the set of functions a lot.
By as much as possible.
By as much as possible.
A good predicate is one which does this; then I should choose the next predicate, which again decreases the set as much as possible.
So a set of good predicates
is such that they decrease this amount of admissible functions.
So if each good predicate significantly reduces the set of admissible functions, then there naturally should not be that many predicates.
No, but if you reduce it very well, the VC dimension of the admissible set of functions is small, and you need not too much training data to do well.
And VC dimension, by the way, is some measure of capacity of this set of functions.
Right.
Roughly speaking, how many functions in this set.
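More precisely, the standard definition: the VC dimension of a set of indicator functions $\mathcal{F}$ is the largest number $h$ of points that $\mathcal{F}$ can shatter, that is, separate into two classes in all possible ways,

$$
h(\mathcal{F}) \;=\; \max\Bigl\{\, n \;:\; \exists\, x_1,\dots,x_n \text{ such that } \mathcal{F} \text{ realizes all } 2^{n} \text{ labelings of } x_1,\dots,x_n \Bigr\},
$$

so it measures the capacity of the set rather than literally counting its functions.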
So you're decreasing, decreasing, and it makes it easy for you to find the function you're looking for.
So the most important part is to create a good admissible set of functions.
And probably there are many ways, but a good predicate is such that it can do that.
So that's for this duck.
You should know a little bit about duck because
What are the three fundamental laws of ducks?
Looks like a duck, swims like a duck, and quacks like a duck.
You should know something about ducks to be able to.
Not necessarily.
Looks like, say, horse.
It's also good.
So it's not, it generalizes from ducks.
Yes, and talks like, and makes sounds like a horse or something.
And runs like a horse and moves like a horse.
It is general.
It is a general predicate that is applied to the duck.
But for duck, you can say play chess like duck.
You cannot say play chess.
Why not?
So you're saying you can, but that would not be a good.
No, you will not reduce a lot of.
You will not do.
Yeah, you would not reduce the set of functions.
So you can, the story, the formal, mathematical story, is that
you can use any function you want as a predicate.
But some of them are good.
Some of them are not, because some of them reduce the set of functions a lot.
To an admissible set, and some of them don't.
But the question is, and I'll probably keep asking this question,
but how do we find such, what's your intuition?
Handwritten recognition, how do we find the answer to your challenge?
Yeah, yeah, I understand it like that.
I understand what to find.
What to find?
What it means, a new predicate.
Yeah.
Like a guy who understands music can say the words which he uses
when he listens to music.
He understands music.
He uses not too many different words. Or you can do like Propp:
you can make a collection
of what he talks about in music,
about this, about that.
It's not too many different situations that he describes.
Because we mentioned Vladimir Propp a bunch,
let me just mention, there's a sequence of 31 structural notions
that are common in stories.
He called them units.
Units, and I think they resonate.
I mean, it starts, just to give an example:
absentation, a member of the hero's community or family
leaves the security of the home environment.
Then it goes to the interdiction.
A forbidding edict or command is passed upon the hero.
Don't go there.
Don't do this.
The hero is warned against some action.
Then step three, violation of interdiction.
Break the rules, break out on your own.
Then reconnaissance.
The villain makes an effort to attain knowledge
needed to fulfill their plot.
So on.
It goes on like this.
Ends in a wedding.
Number 31.
Happily ever after.
No, he just gave description of all situations.
He understands this world.
Of folk tales.
Yeah.
Not folk stories.
But stories.
And these stories are not just in folk tales.
These stories are in detective serials as well.
And probably in our lives; we probably live through this.
Read this.
At the end, they wrote that these predicates are good for different situations,
for movies, for theater.
By the way, there's also criticism, right?
There's another way to interpret narratives, from Claude Lévi-Strauss.
I don't know.
I am not in this business.
No, I know.
It's theoretical literature.
But it's looking at paradigms.
It's always a discussion.
But at least there are units.
It's not too many units that can describe it.
But this guy probably gives other units.
Or another way.
Exactly.
Another set of units.
Another set of predicates.
It does not matter who.
But they exist, probably.
My question is whether given those units, whether without our human brains to interpret
these units, they would still hold as much power as they have.
Meaning, are those units enough when we give them to the alien species?
Let me ask you.
Do you understand digit images?
No.
I don't understand.
No, no, no.
When you can recognize these digit images, it means that you understand.
You understand characters, you understand.
No, no, no, no.
It's the imitation versus understanding question because I don't understand the mechanism
by which I understand.
No, no, no.
I'm not talking about predicates.
You understand that it involves symmetry, maybe structure, maybe something else.
I cannot formulate.
I just was able to find symmetry, so degree of symmetry.
That's really good.
So this is a good line.
I feel like I understand the basic elements of what makes a good handwritten recognition system,
my own.
Like symmetry connects with me.
It seems like that's a very powerful predicate.
My question is, is there a lot more going on that we're not able to introspect?
Maybe I need to be able to understand a huge amount in the world of ideas, thousands of
predicates, millions of predicates, in order to do handwritten recognition.
I don't think so.
So both your hope and your intuition are such that very few predicates are enough.
You're using digits, you're using examples as well.
Theory says that if you use all possible functions from Hilbert space, all possible predicates,
you don't need training data.
You will just have an admissible set of functions which contains one function.
Yes.
So the trade off is when you're not using all predicates, you're only using a few good
predicates, you need to have some training data.
Yes, exactly.
The more good predicates you have, the less training data you need.
Exactly.
That is intelligence.
Still, okay.
I'm going to keep asking the same dumb question.
Handwritten recognition.
To solve the challenge, you kind of propose a challenge that says we should be able to
get state-of-the-art MNIST error rates by using very few 60, maybe fewer examples per digit.
What kind of predicates do you think you'll...
That is the challenge.
So people who will solve this problem, they will answer.
Do you think they'll be able to answer it in a human explainable way?
They just need to write function.
That's it.
But so can that function be written, I guess, by an automated reasoning system?
Whether we're talking about a neural network learning a particular function or another
mechanism?
No.
I'm not against neural networks.
I'm against the admissible set of functions which creates the neural network.
You did it by hand.
Yes.
You don't do it by invariants, by predicates, by reason.
But neural networks can then reverse, do the reverse step of helping you find a function.
Just the task of a neural network is to find a disentangled representation, for example,
what they call, is to find that one predicate function that really captures some kind of
essence.
Not the entire essence, but one very useful essence of this particular visual space.
Do you think that's possible?
Listen, I'm grasping, hoping there's an automated way to find good predicates.
So the question is, what are the mechanisms of finding good predicates, ideas you think
we should pursue?
A young grad student listening right now.
I gave an example: find a situation where the predicate which you're suggesting does not create an invariant.
It's like in physics.
Find situation where existing theory cannot explain it.
Find situation where the existing theory can't explain it.
So you're finding contradictions.
Find contradiction, and then remove this contradiction.
But in my case, what does contradiction mean?
You find a function which, if you use this function, does not keep the invariant.
So it's really the process of discovering contradictions.
Yeah.
It is like in physics: find a situation where you have a contradiction for one of the properties,
for one of the predicates.
Then include this predicate, making an invariant, and solve this problem again.
Now you don't have contradiction.
But it is probably not the best way, I don't know, to look for predicates.
That's just one way.
No, no, it is brute force way.
The brute force way.
What about the ideas under the big umbrella term of symbolic AI?
In the 80s, with expert systems, logic, reasoning-based systems.
Is there hope there to find some sort of deductive reasoning to find good predicates?
I don't think so.
I think that just logic is not enough.
It's kind of a compelling notion though, you know, that when smart people sit in a room
and reason through things, it seems compelling and making our machines do the same is also compelling.
So everything is very simple.
When you have an infinite number of predicates, you can choose the function you want.
You have invariants, and you can choose the function you want.
But you have to have not too many invariants to solve the problem.
So from an infinite number of functions you have to select a finite, and hopefully small, number of functions
which is good enough to extract a small set of admissible functions.
So they will be admissible.
It's for sure, because every function just decreases the set of functions and leaves it admissible.
But it will be small.
But why do you think logic based systems don't, can't help?
Intuition, not...
Because you should know reality.
You should know life.
This guy, like Propp, he knows something.
And he tried to put his understanding into invariants.
That's the human.
Yeah, but see, you're putting too much value into Vladimir Propp knowing something.
No, it is...
It might be misinterpreted.
The story is, what does it mean to know life?
What does it mean?
You know common sense.
No, no.
You know something.
Common sense it is some rules.
You think so?
Common sense is simply rules.
Common sense is...
It's mortality.
It's fear of death.
It's love.
It's spirituality.
It's happiness and sadness.
All of it is tied up into understanding gravity, which is what we think of as common sense.
I'm not ready to discuss so widely.
I want to discuss, to understand, digit recognition.
Anytime I bring up love and death, you bring it back to digit recognition.
You know, it is doable, because there is a challenge which I see how to solve.
If I have a student concentrating on this work, I will suggest something to solve it.
You mean handwritten recognition?
Yeah, it's a beautifully simple, elegant, and yet...
I think that I know the invariants which will solve this.
You do.
I think so.
You think so.
But they are not universal.
Maybe...
I want some universal invariants which are good not only for digit recognition,
but for image understanding.
So, let me ask, how hard do you think is 2D image understanding?
So, if we can kind of intuit handwritten recognition, how big of a step leap journey is it from that?
If I gave you good...
If I solved your challenge for handwritten recognition, how long would my journey then be
from that to understanding more general natural images?
You will understand this immediately, as soon as you make a record.
Because it is not for free.
As soon as you create several invariants which will help you to get the same performance
that the best neural net got using 100 times, maybe more than 100 times, fewer examples,
you have to have something smart to do that.
And you're saying...
That is an invariant.
It is a predicate.
Because you should put in some idea of how to do that.
Okay.
Let me just pause.
Maybe it's a trivial point.
Maybe not.
But handwritten recognition feels like a 2D, two-dimensional problem.
And it seems like, how much is it complicated by the fact that most images are a projection of
a three-dimensional world onto a 2D plane?
It feels like for a three-dimensional world, we need to start understanding common sense
in order to understand an image.
It's no longer visual shape and symmetry.
It's having to start to understand concepts of...
Understand life.
Yeah.
You're saying that there are different invariants.
Different predicates.
Yeah.
And potentially much larger number.
You know, maybe.
But let's start from simple.
Yeah, but you said that it would be...
You know, I cannot think about things which I don't understand.
This I understand.
But I'm sure that I don't understand everything there.
Yeah.
It's like Einstein's: as simple as possible, but not simpler.
And that is exactly the case.
With handwritten.
With handwritten.
Yeah.
But that's the difference between you and I.
I welcome and enjoy thinking about things I completely don't understand.
Because to me, it's a natural extension without having solved handwritten recognition
to wonder how difficult is the next step of understanding 2D, 3D images.
Because ultimately, while the science of intelligence is fascinating,
it's also fascinating to see how that maps to the engineering of intelligence.
And recognizing handwritten digits is not, doesn't help you.
It might, it may not help you with the problem of general intelligence.
We don't know.
It'll help you a little bit.
We don't know how much.
It's unclear.
It's unclear.
Yeah.
But I would like to make a remark.
Yes.
It's not that I take a very primitive problem and make a challenge problem.
I start with very general problem, with Plato.
So you understand, it comes from Plato down to digit recognition.
So you basically took Plato and the world of forms and ideas, and mapped and projected it onto
the clearest, simplest formulation of that big world.
You know, I would say that I did not understand Plato until recently,
until I considered weak convergence and then predicates, and then: oh, this is what
Plato was talking about.
Can you linger on that?
Like why, how do you think about this world of ideas and world of things in Plato?
No, it is metaphor.
It's the metaphor for sure.
It's a poetic and a beautiful metaphor.
But what can you...
But it is the way you should try to understand, to attack, ideas in the world.
So from my point of view, it is very clear.
But there is a line; all the time, people were looking for that.
Say, Plato, and Hegel: whatever is reasonable, it exists; whatever exists, it is reasonable.
I don't know what he had in mind by reasonable.
Right.
There's philosophers again.
No, no, no, no, no, no.
It is the next stop, Wigner: that mathematics understands something of reality.
It is the same Plato line.
And then it comes suddenly to Vladimir Propp.
Look, 31 ideas, 31 units, and it describes everything.
There's abstractions, ideas that represent our world.
And we should always try to reach into that.
Yeah, but you should make a projection onto reality.
But understanding, it is abstract ideas.
You have in your mind several abstract ideas which you can apply to reality.
And reality in this case, sort of if you look at machine learning is data.
This example is data.
Data.
Okay, let me put this on you because I'm an emotional creature.
I'm not a mathematical creature like you.
I find compelling the idea, forget the space, the sea of functions.
There's also a sea of data in the world.
And I find compelling that there might be, like you said, teacher, small examples of
data that are most useful for discovering good, whether it's predicates or good functions,
that the selection of data may be a powerful journey, a useful mechanism.
But coming up with a mechanism for selecting good data might be useful too.
Do you find this idea of finding the right data set interesting at all?
Or do you kind of take the data set as a given?
I think that it is, you know, my scheme is very simple.
You have a huge set of functions,
and you apply it when you have not too much data.
If you pick up a function which describes this data, you will not do very well.
Like randomly pick up?
Yeah, you will have overfitting here.
It will be overfitting.
So you should decrease the set of functions from which you're picking one.
So you should go somehow to an admissible set of functions.
And this, what about weak convergence?
So, but from another point of view, to make an admissible set of functions,
you just need an idea, just a function which you will take in an inner product,
with which you will measure a property of your function.
And that is how it works.
No, I get it. I get it. I understand it.
But let's think about examples.
You have a huge set of functions and you have several examples.
If you just try to take a function which satisfies these examples,
you will still overfit.
You need to decrease, you need an admissible set of functions.
Absolutely.
But what say you have more data than functions?
So consider the, I mean, maybe not more data than functions because that's impossible.
But I was trying to be poetic for a second.
I mean, you have a huge amount of data, a huge amount of examples.
But the amount of functions can be even bigger.
I understand.
There's always a bigger boat.
Full Hilbert space.
Got you.
But you don't find the world of data to be an interesting optimization space.
The optimization should be in the space of functions.
Creating admissible set of functions.
No, even from the classical theory,
from structural risk minimization,
you should organize functions in a way that they will be useful for you.
Right.
And that is admissible set.
The way you're thinking about useful is you're given a small set of examples.
A small set of functions which contains the function you are looking for.
Yeah, but you're looking for it based on the empirical, small set of examples.
Yeah.
But that is another story.
I don't touch it, because I believe that this small set of examples is not too small,
say 60 per class, so that the law of large numbers works.
I don't need the uniform law.
The story is that in statistics there are two laws:
the law of large numbers and the uniform law of large numbers.
So I want to be in a situation where I use the law of large numbers,
but not the uniform law of large numbers.
Right.
So 60, for the law of large numbers,
is large enough?
I hope.
It still needs some evaluation, some bounds.
But the idea is the following.
If you trust that, say, this average gives you something close to the expectation,
then you can talk about that, about this predicate.
And that is the basis of human intelligence.
Good predicates, the discovery of good predicates, is the basis of human intelligence.
The discovery of your understanding of the world, of your methodology of understanding the world.
Because you have several functions which you will apply to reality.
Can you say that again?
You have several functions, predicates, but they are abstract.
Then you will apply them to reality, to your data.
And you will create in this way an invariant, which is useful for your task.
But the predicates are not related specifically to your task, to this task.
They are abstract functions, which can be applied to many tasks that you might be interested in.
It may be many tasks.
I don't know.
Or different tasks.
Well, they should be many tasks.
Yeah, like in Propp's case.
It was for fairy tales, but it happens everywhere.
Okay, so we talked about images a little bit.
Can we talk about Noam Chomsky for a second?
I don't know him personally.
Not personally, I don't know.
His ideas.
But let me just say, do you think language, human language is essential to expressing ideas as Noam Chomsky believes?
So like language is at the core of our formation of predicates.
It's like human language.
For me, language and all the story of language is very complicated.
I don't understand this.
And I am not, I thought about.
Nobody does.
I am not ready to work on that because it's so huge.
It is not for me.
And I believe not for our century.
The 21st century.
Not for 21st century.
We should learn something, a lot of stuff from simple tasks like digit recognition.
You think digit recognition, 2D images.
How would you more abstractly define digit recognition?
It's 2D image, symbol recognition, essentially.
I mean, I'm trying to get a sense, sort of thinking about it now, having worked with MNIST forever.
How small of a subset is this of the general vision recognition problem and the general intelligence problem?
Is it a giant subset?
Is it not?
And how far away is language?
You know, let me refer to Einstein.
Take the simplest problem as simple as possible, but not simpler.
And this challenge is a simple problem.
But it's simple in idea, not simple to achieve.
When you do this, you will find some predicates which help you to do it.
Yeah, I mean, with Einstein, you look at general relativity, but that doesn't help you with quantum mechanics.
In other words, another story, you don't have any universal instrument.
Yes, so I'm trying to wonder which space we're in, whether handwritten recognition is like general relativity,
and then language is like quantum mechanics, so you're still going to have to do a lot of mess to universalize it.
But I'm trying to see what's your intuition and why handwritten recognition is easier than language.
I think a lot of people would agree with that, but if you could elucidate the intuition of why.
I don't think in this direction.
I'm just thinking in the direction that this is a problem which, if we solve it well,
we will create some abstract understanding of images, maybe not of all images.
I would like to talk to the guys who are doing real images at Columbia University.
What kind of images? Unreal?
Real images.
Real images.
Yeah. What's their idea? Is there a predicate? What can be a predicate?
Still, symmetry will play a role in real-life images, in any real-life images.
Two-D images, let's talk about two-D images, because that's what we know.
A neural network was created for two-D images.
So the people I know in vision science, for example, the people who study human vision,
they usually go to the world of symbols, not handwritten recognition exactly,
but other kinds of symbols, to study our visual perception system.
As far as I know, not much predicate-type thinking is understood about our vision system.
They did not think in this direction.
They don't, yeah, but how do you even begin to think in that direction?
There's so much going on.
I would like to discuss it with them, because if we are able to show that this is what's working,
and the theoretical scheme, it's not so bad.
So if we compare it to language: language has letters, a finite set of letters,
and a finite set of ways you can put those letters together.
So it feels more amenable to analysis.
With natural images, there are so many pixels.
No, no, no, language is much, much more complicated.
It involves a lot of different stuff.
It's not just understanding of a very simple class of tasks.
I would like to see a list of tasks where language is involved.
Yes, so there's a lot of nice benchmarks now in natural language processing
from the very trivial, like understanding the elements of a sentence
to question answering, to much more complicated where you talk about open domain dialogue.
The natural question is with handwritten recognition,
it's really the first step of understanding visual information.
Right.
But even our records show that we go in the wrong direction,
because we need 60,000 digits.
So even this first step, so forget about talking about the full journey.
This first step should be taken in the right direction.
No, no, in the wrong direction, because 60,000 is unacceptable.
No, I'm saying it should be taken in the right direction,
because 60,000 is not acceptable.
You can say it's great, we have half a percent of error.
And hopefully the step from doing handwritten recognition using very few examples
is a step towards what babies do when they crawl and understand their physical environment.
I don't know what they do.
I know you don't know about babies.
What babies do: from very small numbers of examples, you will find principles
which are different from what we're using now.
And theoretically, it's more or less clear.
It means that you will use weak convergence, not just strong convergence.
Do you think these principles will naturally be human interpretable?
Oh, yeah.
So will we be able to explain them and have a nice presentation
to show what those principles are,
or are they going to be very abstract kinds of functions?
For example, I talked yesterday about symmetry.
Yes.
And I gave very simple examples; the same will be here.
You gave a predicate, basically, for symmetries.
Yes, for different symmetries, and you have...
A degree of symmetry.
That is important: not just symmetry,
whether it exists or doesn't exist,
but a degree of symmetry.
Yeah, for handwritten recognition.
No, it's not for handwritten.
It's for any images.
But I would like to apply it to handwritten digits.
Right.
In theory, it's more general.
Okay, okay.
So a lot of the things we've been talking about falls...
We've been talking about philosophy a little bit,
but also about mathematics and statistics.
A lot of it falls into this idea,
a universal idea of statistical theory of learning.
What is the most beautiful and sort of powerful
or essential idea you've come across,
even just for yourself, personally,
in the world of statistics or statistical theory of learning?
Probably uniform convergence, which we did
with Alexey Chervonenkis.
Can you describe uniform convergence?
You have the law of large numbers.
For any function, the expectation of the function,
the average of the function, converges to the expectation.
But if you have a set of functions,
for any single function it is true.
But it should converge simultaneously
for the whole set of functions.
And for learning, you need uniform convergence.
Just convergence is not enough.
Because when you pick up the one which gives the minimum,
you can pick up a function which has not converged,
and it will give you the best answer for this function.
So you need uniform convergence to guarantee learning.
So learning does not rely on the trivial law of large numbers,
it relies on the uniform law.
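In symbols, the distinction is the standard one. Write $R(f)$ for the expected risk of a function $f$, $R_\ell(f)$ for its empirical average over $\ell$ examples, and $\mathcal{F}$ for the set of functions. The law of large numbers gives, for each fixed $f$,

$$
R_\ell(f) \;\longrightarrow\; R(f) \qquad (\ell \to \infty),
$$

while learning by picking the empirical minimizer over $\mathcal{F}$ requires the uniform law,

$$
\sup_{f \in \mathcal{F}} \bigl| R_\ell(f) - R(f) \bigr| \;\longrightarrow\; 0,
$$

which holds when the capacity (the VC dimension) of $\mathcal{F}$ is finite.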
But the idea of weak convergence
has existed in statistics for a long time.
And it is interesting that, as I think about myself,
how stupid I was: for 50 years I did not see weak convergence.
I worked only on strong convergence.
But now I think that the most powerful is weak convergence,
because it makes an admissible set of functions.
And even in old proverbs,
when people try to understand recognition, like the duck law,
looks like a duck and so on, they use weak convergence.
People in language, they understand this.
But when we tried to create artificial intelligence,
we went in a different way.
We just considered strong convergence arguments.
So reducing the set of admissible functions,
do you think there should be effort put into understanding
the properties of weak convergence?
You know, in classical mathematics, in Hilbert space,
there are only two forms of convergence, strong and weak.
Now we can use both.
That means that we did everything.
And it so happened that when we use a Hilbert space,
which is a very rich space of continuous functions
which have an integrable square,
we can apply weak and strong convergence for learning
and have a closed-form solution.
So it is computationally simple.
For me, it is a sign that it is the right way,
because you don't need any heuristics,
whatever you want.
But now the only thing that is left
is the concept of what a predicate is.
But that is not statistics.
By the way, I like the fact that you think the heuristics
are a mess that should be removed from the system.
So closed form solution is the ultimate goal.
No, it just so happened.
Using the right instrument, you have a closed-form solution.
Do you think intelligence, human level intelligence,
when we create it,
will have something like a closed form solution?
You know, now I'm looking at the bounds which I gave, bounds for convergence.
And when I'm looking at the bounds,
I am thinking about what the most appropriate kernel
for this bound would be.
So we know that in, say,
all our business we use the radial basis function.
But looking at the bound,
I think that I am starting to understand
that maybe we need to make corrections to the radial basis function
so that it works better for these bounds.
So I'm again trying to understand what type of kernel
has the best approximation,
the best fit to this bound.
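For context, the radial basis function kernel referred to here is the standard Gaussian kernel,

$$
K(x, x') \;=\; \exp\bigl(-\gamma\,\lVert x - x' \rVert^{2}\bigr), \qquad \gamma > 0;
$$

what the corrections to it would look like is left open in the conversation.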
Sure.
So there's a lot of interesting work that could be done
in discovering better functions than radial basis functions
for bounds.
It still comes from,
you're looking at the math and trying to understand.
From your own mind, looking at the, I don't know.
But then I try to understand what will be good for that.
Yeah.
But to me, there's still a beauty.
Again, maybe I'm a descendant of Alan Turing, drawn to heuristics.
To me, ultimately, intelligence will be a mess of heuristics.
And that's the engineering answer, I guess.
Absolutely.
When you're doing self-driving cars,
the great guy is the one who will do this;
it does not matter what theory is behind that,
it is who has a better feeling of how to apply it.
But by the way, it is the same story about predicates,
because you cannot create a rule for every situation;
there are many more situations than you have rules for.
But maybe you can have more abstract rules,
and then there will be fewer of these rules.
It is the same story about ideas, and ideas applied to specific cases.
But still you should reach.
You cannot avoid this.
Yes, of course.
But you should still reach for the ideas to understand the science.
Let me kind of ask,
do you think neural networks or functions can be made to reason?
What do you think?
We've been talking about intelligence,
but this idea of reasoning,
there's an element of sequentially disassembling,
interpreting the images.
So when you think of handwritten recognition,
we kind of think that there will be a single,
there's an input and output.
There's not a recurrence.
What do you think about the idea of recurrence,
of going back to memory and thinking through this sort of sequentially
mangling the different representations over and over until you arrive at a conclusion?
Or is ultimately all that can be wrapped up into a function?
Are you suggesting that we use this type of algorithm?
When I start thinking,
first of all I try to understand what I want.
Can I write down what I want?
And then I try to formalize it.
And when I do that,
I think about how I have to solve this problem.
And till now, I did not see a situation where...
You need recurrence.
But do you observe human beings?
Do you try to...
It's the imitation question, right?
It seems that human beings reason this kind of sequentially...
Does that inspire in you a thought that we need to add that into our intelligent systems?
You're saying...
I mean, you've kind of answered saying,
until now I haven't seen a need for it.
And so because of that, you don't see a reason to think about it.
You know, most of these things I don't understand.
Reasoning in humans is too complicated for me.
For me, the most difficult part is to ask questions, good questions.
How it works, how people come up with questions,
I don't know.
You said that machine learning is not only about technical things, speaking of questions,
but it's also about philosophy.
So what role does philosophy play in machine learning?
We talked about Plato, but generally thinking in this philosophical way.
Does it have...
How do philosophy and math fit together in your mind?
First ideas, and then their implementation. It's like the predicate, like, say, the admissible set of functions.
Everything comes together.
Because the first iteration of the theory was done 50 years ago.
That is VC theory.
Everything is there: if you have data, you can require that your set of functions
does not have big capacity,
so low VC dimension. You can do that.
You can do structural risk minimization, control capacity.
But you were not able to make the admissible set of functions good.
Then suddenly we realized that we did not use another idea of convergence, which we could.
Everything comes together.
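For reference, the "control capacity" statement has a standard symbolic form; the following is the classic VC bound as usually stated, given roughly from memory of the textbook result, not as a quotation from the conversation.

```latex
% Classic VC bound, stated roughly: for a set of indicator functions with
% VC dimension h, with probability at least 1 - \eta over a sample of size
% \ell, simultaneously for all functions in the set,
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
  \;+\; \sqrt{\frac{h\bigl(\ln\tfrac{2\ell}{h} + 1\bigr) - \ln\tfrac{\eta}{4}}{\ell}}
% Structural risk minimization picks, from a nested sequence of function
% sets of increasing h, the element that minimizes this right-hand side,
% trading empirical risk against capacity.
```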
But those are mathematical notions.
Philosophy plays a role of simply saying that we should be swimming in the space of ideas.
Let's talk about what philosophy is.
Philosophy means understanding of life.
So, understanding of life: say, people like Plato,
they understand life on a very high, abstract level.
So whatever I am doing is just an implementation of my understanding of life.
But every new step is very difficult.
For example, to find this idea that we need weak convergence was not simple for me.
So that required thinking about life a little bit.
Hard to trace, but there was some thought process.
I was thinking about the same problem for 50 years somehow.
And again, and again, and again.
I tried to be honest, and that is very important, not to be very enthusiastic,
but concentrate on whatever we were not able to achieve, for example.
And understand why.
And now I understand it, because I believe in math, I believe in Wigner's idea.
And now, when I see that there are only two ways of convergence and we are using both,
that means that we must be able to do as well as people do.
But now, exactly, it is philosophy: what we know about predicates,
how we understand life, can we describe it as a predicate?
I thought about that, and one is more or less obvious: the level of symmetry.
But for the next one, I have a feeling it's something about structures.
But I don't know how to formulate it, how to measure structure and all this stuff.
And when the guy who solves this challenge problem does it, and we look at how he did it,
probably symmetry alone will not be enough.
But something like symmetry will be there?
Absolutely, symmetry will be there.
Level of symmetry will be there.
And degrees of symmetry: anti-symmetry, diagonal symmetry, vertical symmetry.
I don't even know in how many different directions you can use the idea of symmetry; it's very general.
But it will be there.
I think that people are very sensitive to the idea of symmetry.
But there are several ideas like symmetry.
Which I would like to learn.
But you cannot learn them just by thinking about it.
You should do challenging problems and then analyze why you were able to solve them.
And then you will see.
Very simple things are not easy to find,
even when you talk about this all the time.
I was surprised when I tried to understand:
do people describe in language a strong convergence mechanism for learning?
I did not see it, I don't know.
But weak convergence, this duck story and stories like that: when you explain to kids,
you will use a weak convergence argument.
It looks like a duck, it acts like a duck.
But when you try to formalize it, you just ignore this.
Why?
Why, for 50 years from the start of machine learning?
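To make the contrast explicit, the two modes of convergence can be written as follows; this is a paraphrase of the standard definitions added for the reader, not a quotation from the lecture.

```latex
% Strong (L2) convergence of approximations f_\ell to the target f_0:
\lim_{\ell \to \infty} \int \bigl(f_\ell(x) - f_0(x)\bigr)^{2} \, d\mu(x) \;=\; 0
% Weak convergence: agreement against every test function (predicate) \psi:
\lim_{\ell \to \infty} \int \bigl(f_\ell(x) - f_0(x)\bigr)\, \psi(x)\, d\mu(x) \;=\; 0
  \quad \text{for all admissible } \psi
% Classical empirical risk minimization targets only the first; the
% predicates discussed in this conversation are a way to exploit the second.
```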
And that's the role of philosophy.
I think that maybe, I don't know, maybe the theory is also
to blame for that, because of empirical risk minimization and all this stuff.
If you read the textbooks now, they are just about bounds for empirical risk minimization.
They are not looking at the other problem, the admissible set.
But on the topic of life, perhaps you could talk in Russian for a little bit.
What's your favorite memory from childhood?
What is your favorite memory from childhood?
Music.
Can you try to answer in Russian?
Music.
It was very cool when...
Such music.
Classical music.
What's your favorite?
Different composers.
At first it was Vivaldi; I was surprised that it was possible.
And then when I understood Bach, I was completely shocked.
By the way, from him, I think that there are predicates, like structures.
In Bach, of course.
Because you just feel the structure there.
And I don't think that different elements of life are strongly divided in the sense of predicates.
Structure in life, structure in human relationships, structure.
How do you find these high-level predicates?
In Bach and in life.
Everything is connected.
Now that we're talking about Bach, let's switch back to English.
Because I like Beethoven and Chopin, so...
Chopin is another music story.
But Bach, if we talk about predicates, Bach probably has the most well-defined predicates underlying it.
It is very interesting to read what critics are writing about Bach.
Which words are they using?
They're trying to describe predicates.
And then Chopin, it is a very different vocabulary,
very different predicates.
And I think that if you make a collection of that,
then maybe from this you can describe predicates for digit recognition.
Well...
From Bach and Chopin.
No, no, no. Not from Bach and Chopin.
From the critics' interpretation of the music.
When they're trying to explain the music,
what do they use?
They describe high-level ideas, Plato's ideas,
of what's behind this music.
That's brilliant.
So art is not self-explanatory in some sense.
So you have to try to convert it into ideas explicitly.
When you go from ideas to the representation, it is an easy way.
But when you try to go back, it is an ill-posed problem.
But nevertheless, I believe that when you're looking at that, even at art,
you will be able to find predicates for digit recognition.
That's such a fascinating and powerful notion.
Do you ponder your own mortality?
Do you think about it?
Do you fear it?
Do you draw insight from it?
About mortality?
No, yeah.
Are you afraid of death?
Not too much.
Not too much.
It is a pity that I will not be able to do some things which I think
I have a feeling for how to do.
For example, I would be very happy to work with these guys from music criticism,
to write this collection of descriptions of how they describe music,
which predicates they use.
And from art as well.
Then take what is in common,
try to understand the predicates
that are absolute, that apply to everything.
And then use that for visual recognition and see if there is a connection.
Exactly.
There's still time. We've got time.
We've got time.
It takes years and years and years.
It's a long way.
See, you've got the patient mathematician's mind.
I think it could be done very quickly and very beautifully.
I think it's a really elegant idea.
Yeah, but it is also one of many.
You know, most of the time is not in making this collection; it is in understanding
what is common, and thinking about that again and again and again.
Again and again and again.
But I think sometimes, especially just when you say this idea now,
even just putting together the collection and looking at the different sets of data,
language, trying to interpret music, criticize music and images.
I think there will be sparks of ideas that will come.
Of course, again and again, you'll come up with better ideas.
But even just that notion is a beautiful notion.
I even have some example.
So I have a friend who was a specialist in Russian poetry.
She is a professor of Russian poetry.
She did not write poems, but she knows a lot of stuff.
She makes books, several books, and one of them is a collection of Russian poetry.
She has images of Russian poetry.
She collects all images of Russian poetry.
And I asked her to do the following.
We have the MNIST digit recognition task, and we took 100 digits, or maybe less than 100.
I don't remember, maybe 50 digits.
And I asked her: from a poetical point of view, describe every image you see,
using only the words and images of Russian poetry.
And she did it.
And then we tried to use it.
I call it learning using privileged information;
I call that description the privileged information.
You have information in two languages.
One language is just the image of the digit,
and the other language is a description of this image.
And this description is the privileged information.
And there is an algorithm such that, when you train using privileged information,
you do better.
Much better.
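For readers who want the shape of that algorithm, one published form is the SVM+ formulation of learning using privileged information; the statement below is given roughly from memory and should be read as a sketch rather than as the exact method referred to here.

```latex
% SVM+ (learning using privileged information), stated roughly from memory.
% Training triples (x_i, x_i^*, y_i): x_i is the ordinary feature vector
% (e.g. the digit image), x_i^* the privileged description (e.g. the poetic
% text), available only during training.
\min_{w,\, b,\, w^*,\, b^*} \;
  \tfrac{1}{2}\lVert w \rVert^{2}
  + \tfrac{\gamma}{2}\lVert w^* \rVert^{2}
  + C \sum_{i=1}^{\ell} \bigl(\langle w^*, x_i^* \rangle + b^*\bigr)
\quad \text{s.t.}\quad
  y_i\bigl(\langle w, x_i \rangle + b\bigr) \;\ge\; 1 - \bigl(\langle w^*, x_i^* \rangle + b^*\bigr),
  \qquad \langle w^*, x_i^* \rangle + b^* \;\ge\; 0 .
% The slack of each example is modeled by the correcting function
% \langle w^*, x_i^* \rangle + b^* learned in the privileged space;
% at test time only w and b are used, so x^* is never needed again.
```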
So there's something there?
Something there.
And there is, at NEC, she unfortunately died,
the collection of digits along with the poetic descriptions of these digits.
So there's something there in that poetic description.
But I think that there is an abstract idea there, on the level of Plato's ideas.
Yeah, they're there.
That could be discovered.
And music seems to be a good entry point.
But as soon as we start with this challenge problem...
The challenge problem?
It is immediately connected to all this stuff.
Especially with your talk and this podcast, and I'll do whatever I can to advertise it.
It's such a clean, beautiful, Einstein-like formulation of the challenge before us.
All right.
Let me ask another absurd question.
We talked about mortality.
We talked about philosophy of life.
What do you think is the meaning of life?
What's the predicate for mysterious existence here on Earth?
I don't know.
It's very interesting.
We have in Russia, I don't know if you know them, the Strugatsky brothers.
They write science fiction; they think about humans and what's going on.
And they have an idea that two types of people are developing:
common people and very smart people.
It has just started,
and these two branches of people will go in different directions very soon.
So that's what they're thinking about.
So the purpose of life is to create two paths of human societies.
Yes.
Simple people and more complicated people.
Which do you like best?
The simple people or the complicated ones?
I don't know.
It is just their fantasy.
You know, every week we have a guy who is a writer and also a theoretician of literature.
And he explains how he understands literature and human relationships, how he sees life.
And I understood that I'm just a small kid compared to him.
He's a very smart guy in understanding life.
He knows these predicates, he knows the big blocks of life.
I am amazed every time I listen to him,
and he's just talking about literature.
And I was surprised to learn
that the managers in big companies, most of them are guys who studied English language and English literature.
So why?
Because they understand life.
They understand models.
And among them there may be many talented critics who are just analyzing this.
And this is big science, like what Propp did.
These are those blocks.
They are very smart.
It amazes me that you are and continue to be humbled by the brilliance of others.
I am very modest about myself.
I see so many smart guys around.
Well, let me be immodest for you.
You're one of the greatest mathematicians, statisticians of our time.
It's truly an honor.
Thank you for talking again.
And let's talk.
It is not so.
Yes, I know my limits.
Let's talk again when your challenge is taken on and solved by a grad student.
Let's talk again when they use it.
Maybe music will be involved.
Vladimir, thank you so much.
Thank you very much.
Thanks for listening to this conversation with Vladimir Vapnik.
And thank you to our presenting sponsor, Cash App.
Download it, use code LEX Podcast.
You'll get $10 and $10 will go to first, an organization that inspires and educates young minds to become science and technology innovators of tomorrow.
If you enjoy this podcast, subscribe on YouTube, give us five stars on Apple Podcasts, support on Patreon, or simply connect with me on Twitter at Lex Fridman.
And now, let me leave you with some words from Vladimir Vapnik.
When solving a problem of interest, do not solve a more general problem as an intermediate step.
Thank you for listening.
I hope to see you next time.