The following is a conversation with Keoki Jackson.
He's the CTO of Lockheed Martin,
a company that through its long history
has created some of the most incredible engineering
marvels human beings have ever built,
including planes that fly fast and undetected,
defense systems that intercept nuclear threats that
can take the lives of millions, and systems that venture out
into space, the moon, Mars, and beyond.
And these days, more and more, artificial intelligence
has an assistive role to play in these systems.
I've read several books in preparation
for this conversation.
It is a difficult one, because in part,
Lockheed Martin builds military systems
that operate in a complicated world that often does not
have easy solutions in the gray area between good and evil.
I hope one day this world will rid itself of war
in all its forms.
But the path to achieving that in a world that
does have evil is not obvious.
What is obvious is good engineering
and artificial intelligence research
has a role to play on the side of good.
Lockheed Martin and the rest of our community
are hard at work at exactly this task.
We talk about these and other important topics
in this conversation.
Also, most certainly, both Keoki and I have a passion for space,
us humans venturing out toward the stars.
We talk about this exciting future as well.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube,
give it five stars on iTunes, support it on Patreon,
or simply connect with me on Twitter
at Lex Fridman, spelled F-R-I-D-M-A-N.
And now, here's my conversation with Keoki Jackson.
I read several books on Lockheed Martin recently.
My favorite in particular is by Ben Rich,
called Skunk Works, his personal memoir.
It gets a little edgy at times.
But from that, I was reminded that the engineers of Lockheed
Martin have created some of the most incredible engineering
marvels human beings have ever built throughout the 20th
century and the 21st.
Do you remember a particular project or system at Lockheed,
or before that, with the Space Shuttle Columbia,
that left you just in awe at the fact
that us humans could create something like this?
That's a great question.
There's a lot of things that I could draw on there.
When you look at the Skunk Works and Ben Rich's book,
in particular, of course, it starts off
with basically the start of the jet age and the P-80.
I had the opportunity to sit next to one of the Apollo
astronauts, Charlie Duke, recently at dinner.
And I said, hey, what's your favorite aircraft?
And he said, well, it was by far the F-104 Starfighter,
which was another aircraft that came out of Lockheed there.
What kind of?
It was the first Mach 2 jet fighter aircraft.
They called it the missile with a man in it.
And so those are the kinds of things
I grew up hearing stories about.
Of course, the SR-71 is incomparable
as kind of the epitome of speed, altitude, and just
the coolest-looking aircraft ever.
So it's a reconnaissance plane.
It's an intelligence, surveillance, and reconnaissance
aircraft that was designed to be able to outrun,
to basically go faster than, any air defense system.
But I'll tell you, I'm a space junkie.
That's why I came to MIT.
That's really what took me, ultimately, to Lockheed Martin.
And I grew up on that, and Lockheed Martin, for example,
has been essentially at the heart of every planetary mission;
all the Mars missions, we've had a part in.
And we've talked a lot about the 50th anniversary of Apollo
here in the last couple of weeks, right?
But remember July 20, 1976, again, the national space
day, the landing of the Viking lander on the surface
of Mars, just a huge accomplishment.
And when I was a young engineer at Lockheed Martin,
I got to meet engineers who had designed various pieces
of that mission as well.
So that's what I grew up on, these planetary missions,
the start of the space shuttle era,
and ultimately I had the opportunity
to see Lockheed Martin's part,
and we can maybe talk about some of these here,
in all of these space journeys
over the years.
Do you dream, and I apologize for getting philosophical
or sentimental at times, I do romanticize the notion
of space exploration, so do you dream of the day when us humans
colonize another planet, like Mars, or a man, a woman,
a human being, steps on Mars?
Absolutely.
And that's a personal dream of mine.
I haven't given up yet on my own opportunity
to fly into space, but from the Lockheed Martin perspective,
this is something that we're working towards every day.
And of course, we're building the Orion spacecraft, which
is the most sophisticated human-rated spacecraft ever
built, and it's really designed for these deep space journeys,
starting with the moon, but ultimately going to Mars,
and being the platform, from a design perspective
we call it Mars Base Camp, to be able to take humans
to the surface, and then, after a mission of a couple of weeks,
bring them back up safely.
And so that is something I want to see happen during my time
at Lockheed Martin.
So I'm pretty excited about that.
And I think once we prove that's possible,
colonization might be a little bit further out,
but it's something that I'd hope to see.
So maybe you can give a little bit
of an overview. So Lockheed Martin
partnered a few years ago with Boeing
to form the United Launch Alliance, ULA, working with the DoD
and NASA to build launch systems and rockets.
What's beyond that?
What's Lockheed's mission, timeline, and long-term dream
in terms of space?
You mentioned the moon.
I've heard you talk about asteroids and Mars.
What's the timeline?
What's the engineering challenges?
And what's the dream long-term?
Yeah, I think the dream long-term is
to have a permanent presence in space beyond low Earth
orbit, ultimately with a long-term presence on the moon
and then on to the planets, to Mars.
And I'm sorry to interrupt on that.
So long-term presence means sustained and sustainable
presence, and an economy, a space economy,
that really goes alongside that.
With human beings, and being able to launch perhaps
from there, like a hop? There's a lot of energy that
goes into those hops, right?
So I think the first step is being able to get there
and to be able to establish a sustained base, right,
and build from there.
And a lot of that means getting, as you know,
things like the cost of launch down.
And you mentioned United Launch Alliance.
And so I don't want to speak for ULA,
but obviously they're working really
hard on their next generation of launch vehicles
to maintain that incredible mission success record
that ULA has, but ultimately continue to drive down
the cost and make the flexibility, the speed,
and the access ever greater.
So what are the missions that are on the horizon
that you could talk about?
Is there a hope to get to the moon?
Absolutely, absolutely.
I mean, I think you know this, or you
may know this, there's a lot of ways
to accomplish some of these goals.
And so that's a lot of what's in discussion today.
But ultimately, the goal is to be able to establish a base,
essentially in cislunar space that
would allow for ready transfer from orbit
to the lunar surface and back again.
And so that's sort of that near-term,
and I say near-term as in the next decade or so, vision.
Starting off with a stated objective
by this administration to get back to the moon
in the 2024, 2025 time frame, which is right around the corner
here.
How big of an engineering challenge is that?
I think the big challenge is not so much to go, but to stay.
And so we demonstrated in the 60s
that you could send somebody up, do a couple of days of mission,
and bring them home again successfully.
Now we're talking about doing that
at, I don't want to say an industrial scale,
but a sustained scale.
So permanent habitation, regular reuse of vehicles,
the infrastructure to get things like fuel, air, consumables,
replacement parts, all the things that you need to sustain
that kind of infrastructure.
So those are certainly engineering challenges.
There are budgetary challenges.
And those are all things that we're
going to have to work through.
The other thing, and I shouldn't,
I don't want to minimize this.
I mean, I'm excited about human exploration,
but the reality is our technology
and where we've come over the last 40 years essentially
has changed what we can do with robotic exploration as well.
And to me, it's incredibly thrilling.
This seems like old news now, but the fact
that we have rovers driving around the surface of Mars
and sending back data is just incredible.
The fact that we have satellites in orbit around Mars
that are collecting weather data, looking
at the terrain, mapping,
all these kinds of things on a continuous basis,
that's incredible.
And you've got the time lag, of course,
going out to the planets.
But you can effectively have virtual human presence there
in a way that we have never been able to do before.
And now with the advent of even greater processing power,
better AI systems, better cognitive systems
and decision systems, you put that together
with the human piece,
and we really open up the solar system
in a whole different way.
And I'll give you an example.
We've got OSIRIS-REx, which is a mission to the asteroid Bennu.
So the spacecraft is out there right now on basically a year-long
mapping activity to map the entire surface of that asteroid
in great detail, all autonomously piloted, right?
But the idea then is that, and this is not too far away,
it's going to go in.
It's got a sort of fancy vacuum cleaner with a bucket.
It's going to collect the sample off the asteroid
and then send it back here to Earth.
And so we have gone from sort of those tentative steps
in the 70s, the early landings, the early video of the solar system.
Now we've sent spacecraft to Pluto.
We have gone to comets and intercepted comets.
We've brought Stardust material back.
So we've gone far, and there's incredible opportunity
to go even farther.
So it seems quite crazy that this is even possible.
Can you talk a little bit about what it means
to orbit an asteroid and, with a bucket,
to try to pick up some soil samples?
Yeah.
So part of it is just kind of, these are the same kinds of techniques
we use here on Earth for high-speed, high-accuracy imagery,
stitching these scenes together, and creating essentially
high-accuracy world maps.
And so that's what we're doing, obviously,
on a much smaller scale on the asteroid.
But the other thing that's really interesting,
beyond that neat control and data and imagery problem,
is the stories around how we designed the collection.
This is sort of the human ingenuity element:
essentially we had an engineer who one day
started messing around with parts, a vacuum cleaner, a bucket,
thinking maybe we could do something like this.
And that was what led to what we call the pogo stick
collection, where basically the spacecraft comes down,
it's only there for seconds, does that collection,
essentially blows the regolith material
into the collection hopper, and off it goes.
It doesn't really land, almost.
It's a very short landing.
Wow, that's incredible.
So, while we talk a little bit more about space,
what's the role of the human in all of this?
What are the challenges?
What are the opportunities for humans
as they pilot these vehicles in space
and for humans that may step foot on either the moon or Mars?
Yeah, it's a great question, because I just
have been extolling the virtues of robotic and rovers,
autonomous systems.
And those absolutely have a role.
I think the thing that we don't know how to replace today
is the ability to adapt on the fly to new information.
And I believe that will come, but we're not there yet.
There's a ways to go.
And so you think back to Apollo 13
and the ingenuity of the folks on the ground and on the spacecraft,
who essentially cobbled together a way
to get the carbon dioxide scrubbers to work.
Those are the kinds of things that ultimately,
and I'd say not just dealing with anomalies,
but dealing with new information.
You see something, and rather than waiting 20 minutes
or half an hour or an hour to try to get information back
and forth, you're able to essentially
re-vector on the fly, collect different samples,
take a different approach, choose different areas to explore.
Those are the kinds of things that human presence enables
that are still a ways ahead of us on the AI side.
Yeah, there's some interesting stuff we'll talk about
on the teaming side here on Earth.
That's pretty cool to explore.
And in space, let's not leave the space piece out.
So what is teaming?
What does AI and humans working together in space look like?
Yeah, one of the things we're working on
is a system called Maya, which you can think of,
it's an AI assistant.
And in space, exactly.
And you think of it as the Alexa in space, right?
But this goes hand in hand with a lot of other developments.
And so in today's world, everything is essentially model-based,
from model-based systems engineering to the actual digital tapestry
that goes through the design, the build, the manufacture,
the testing, and ultimately the sustainment of these systems.
And so our vision is really that when our astronauts are there
around Mars, you're going to have that entire digital library
of the spacecraft, of its operations, all the test data
and flight data from previous missions,
to be able to look and see if there are anomalous conditions,
tell the humans, and potentially deal with that before it
becomes a bad situation, and help the astronauts work
through those kinds of things.
And it's not just dealing with problems as they come up,
but also offering up opportunities
for additional exploration capability, for example.
So that's the vision: take the best of the human
to respond to changing circumstances,
and rely on the best of AI capabilities
to monitor this almost infinite number of data points
and correlations of data points
that humans, frankly, aren't that good at.
So how do you develop systems in space like this,
whether it's Alexa in space or in general,
any kind of control systems, any kind of intelligent systems,
when you can't really test stuff too much out in space?
It's very expensive to test stuff.
So how do you develop such systems?
Yeah, that's the beauty of this digital twin, if you will.
And of course, with Lockheed Martin,
we've over the past five plus decades
been refining our knowledge of the space environment,
of how materials behave, dynamics, the controls,
the radiation environments, all of these kinds of things.
So we're able to create very sophisticated models.
They're not perfect, but they're very good.
And so you can actually do a lot.
I spent part of my career simulating communication
spacecraft, missile warning spacecraft, GPS spacecraft
in all kinds of scenarios and all kinds of environments.
So this is really just taking that to the next level.
The interesting thing is that now you're
bringing into that loop a system, depending
on how it's developed, that may be non-deterministic.
It may be learning as it goes.
In fact, we anticipate that it will be learning as it goes.
And so that brings a whole new level of interest, I guess,
into how do you do verification and validation
of these non-deterministic learning systems
in scenarios that may go out of the bounds or the envelope
that you have initially designed to.
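To make the verification and validation question concrete, here is a minimal sketch of what simulation-based checking of a non-deterministic system can look like: run the system through many randomized scenarios in a model of the environment and count how often it leaves a defined safety envelope. The controller, the envelope bound, and the scenario model below are hypothetical stand-ins for illustration, not anything Lockheed Martin has described.

```python
import random

ALTITUDE_FLOOR_M = 100.0   # hypothetical safety envelope: never descend below this
TRIALS = 10_000

def noisy_controller(altitude_m: float) -> float:
    """Stand-in for a learned, non-deterministic controller:
    commands a climb rate with some randomness in its output."""
    return 5.0 - 0.01 * (altitude_m - 500.0) + random.gauss(0.0, 2.0)

def run_scenario() -> bool:
    """Simulate one randomized scenario; report True if the
    safety envelope was never violated."""
    altitude = random.uniform(300.0, 800.0)   # randomized initial condition
    for _ in range(600):                      # 600 one-second steps
        altitude += noisy_controller(altitude)
        if altitude < ALTITUDE_FLOOR_M:
            return False
    return True

violations = sum(not run_scenario() for _ in range(TRIALS))
print(f"envelope violations: {violations} / {TRIALS}")
```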
So this system, in its intelligence, has
some of the same complexity a human does.
And it learns over time.
It's unpredictable in certain kinds of ways.
So you also have to model that when you're thinking about it.
So in your view, it's possible
to model the majority of situations,
the important aspects of situations here on Earth
and in space, enough to test these systems?
Yeah, this is really an active area of research.
And we're actually funding university research
in a variety of places, including MIT.
This is in the realm of trust and verification
and validation of, I'd say, autonomous systems in general.
And then as a subset of that, autonomous systems
that incorporate artificial intelligence capabilities.
And this is not an easy problem.
We're working with startup companies.
We've got internal R&D.
But our conviction is that autonomy and more and more
AI-enabled autonomy is going to be in everything
that Lockheed Martin develops and fields.
And autonomy and AI are going to be
retrofit into existing systems.
They're going to be part of the design
for all of our future systems.
And so maybe I should take a step back and talk about
the way we define autonomy.
So we talk about autonomy as essentially a system
that composes, selects, and then executes decisions
with varying levels of human intervention.
And so you could think of no autonomy.
So this is essentially the human doing the task.
You can think of, effectively, partial autonomy
where the human is in the loop.
So making decisions in every case
about what the autonomous system can do.
Either in the cockpit or remotely.
Or remotely, exactly.
But still in that control loop.
And then there's what you'd call supervisory autonomy.
So the autonomous system is doing most of the work.
The human can intervene to stop it
or to change the direction.
And then ultimately, full autonomy
where the human is off the loop altogether.
And for different types of missions,
you want to have different levels of autonomy.
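As a rough illustration of the spectrum described here, a minimal sketch in Python; the level names and the example mission mapping are assumptions for illustration only, not Lockheed Martin terminology beyond what is said above.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """Rough spectrum of autonomy as described above: a system composes,
    selects, and executes decisions with varying human intervention."""
    NONE = 0           # human performs the task directly
    HUMAN_IN_LOOP = 1  # human approves each decision (in cockpit or remote)
    SUPERVISORY = 2    # system acts; human can intervene or redirect
    FULL = 3           # human is off the loop altogether

def required_level(mission: str) -> AutonomyLevel:
    """Hypothetical mapping from mission type to autonomy level;
    the mission names here are illustrative assumptions only."""
    mapping = {
        "piloted_flight": AutonomyLevel.HUMAN_IN_LOOP,
        "loyal_wingman": AutonomyLevel.SUPERVISORY,
        "ground_collision_avoidance": AutonomyLevel.SUPERVISORY,
    }
    return mapping.get(mission, AutonomyLevel.NONE)

print(required_level("loyal_wingman"))  # AutonomyLevel.SUPERVISORY
```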
So now take that spectrum and this conviction
that autonomy and more and more AI are in everything
that we develop.
The kinds of things that Lockheed Martin does a lot of times
are safety of life critical kinds of missions.
Think about aircraft, for example.
And so we require, and our customers require,
an extremely high level of confidence.
One, that we're going to protect life.
Two, that these systems will behave in ways
that their operators can understand.
And so this gets into that whole field again,
being able to verify and validate
that these systems will operate the way they're designed
and the way they're expected.
And furthermore, that they will do that in ways
that can be explained and understood.
And that is an extremely difficult challenge.
Yeah, so here's a difficult question.
I don't mean to bring this up, but I
think it's a good case study that people are familiar with.
The Boeing 737 MAX commercial airplane
has had two recent crashes where its flight control
software system failed.
And it's software, so I don't mean to speak about Boeing.
But broadly speaking, we have this in the autonomous vehicle
space too, semi-autonomous.
When you have millions of lines of code software
making decisions, there is a little bit
of a clash of cultures, because software engineers often don't
have the same culture of safety
that people who build systems, like at Lockheed Martin,
do, where it has to be exceptionally safe
and you have to test extensively.
So how do we get this right when software is making
so many decisions?
Yeah, and there's a lot of things that have to happen.
And by and large, I think it starts with the culture,
which is not necessarily something
that, A, is taught in school, or, B, would come naturally.
Depending on what kind of software you're developing,
it may not be relevant if you're targeting ads
or something like that.
And by and large, I'd say not just Lockheed Martin,
but certainly the aerospace industry as a whole
has developed a culture that does focus on safety,
safety of life, operational safety, mission success.
But as you note, these systems have
gotten incredibly complex.
And so they're to the point where the state spaces
become so huge that it's impossible, or very
difficult, to do a systematic verification
across the entire set of potential ways
that an aircraft could be flown, all the conditions that
could happen, all the potential failure scenarios.
Now, maybe that's soluble one day.
Maybe when we have our quantum computers at our fingertips,
we'll be able to actually simulate across an entire almost
infinite state space.
But today, there's a lot of work to really try
to bound the system to make sure that it behaves
in predictable ways, and then have
this culture of continuous inquiry and skepticism
and questioning to say, did we really consider
the right realm of possibilities?
Have we done the right range of testing?
Do we really understand, in this case, human and machine
interactions, the human decision process
alongside the machine processes?
And so that's that culture that we
call it the culture of mission success at Lockheed Martin
that really needs to be established.
And it's not just something that people learn by living in it.
It's something that has to be promulgated,
and it's done from the highest level.
So at a company like Lockheed Martin.
Yeah.
And the same is being faced by certain autonomous vehicle
companies, where that culture is not there
because they were started mostly by software engineers.
So that's what they're struggling with.
Are there lessons that you think we
should learn as an industry and a society from the Boeing 737
MAX crashes?
These crashes, obviously, are tremendous tragedies.
They're tragedies for all of the people, the crew,
the families, the passengers, the people on the ground involved.
And it's also a huge business and economic setback as well.
I mean, we've seen that it's impacting essentially
the trade balance of the US.
So these are important questions.
And these are the kinds of things where we've
seen similar kinds of questioning at times.
You go back to the Challenger accident.
And it is, I think, always important to remind ourselves
that humans are fallible, that the systems we create
as perfect as we strive to make them,
we can always make them better.
And so another element of that culture of mission success
is really that commitment to continuous improvement.
If there's something that goes wrong,
a real commitment to root cause and true root cause
understanding, to taking the corrective actions
and to making the future systems better.
And certainly, we strive for no accidents.
And if you look at the record of the commercial airline
industry as a whole and the commercial aircraft
industry as a whole, there's a very nice decaying exponential,
to the point where there are years now with no commercial aircraft
accidents, or fatal accidents, at all.
So that didn't happen by accident.
It was through the regulatory agencies,
FAA, the airframe manufacturers, really working on a system
to identify root causes and drive them out.
So maybe we can take a step back.
Many people are familiar,
but broadly, what kinds of categories
of systems is Lockheed Martin involved in building?
You know, Lockheed Martin, we think of ourselves
as a company that solves hard mission problems.
And the output of that might be an airplane or spacecraft
or a helicopter or radar or something like that.
But ultimately, we're driven by these questions:
who is our customer?
What is the mission that they need to achieve?
And so that's what drove the SR-71, right?
How do you get pictures of a place
where you've got sophisticated air defense systems that
are capable of handling any aircraft that
was out there at the time, right?
So that's what led to the SR-71.
Build a nice flying camera.
Exactly.
And make sure it gets out and it gets back, right?
And that led ultimately to really the start of the space
program in the US as well.
So now take a step back to Lockheed Martin of today.
And we are on the order of 105 years old now
between Lockheed and Martin, the two big heritage companies.
Of course, we're made up of a whole bunch of other companies
that came in as well.
General Dynamics, kind of go down the list.
Today, you can think of us in this space of solving mission
problems.
So obviously on the aircraft side,
tactical aircraft building the most advanced fighter
aircraft that the world has ever seen.
We're up to now several hundred of those
delivered, building almost 100 a year.
And of course, working on the things that come after that.
On the space side, we are engaged in pretty much
every venue of space utilization and exploration
you can imagine.
So I mentioned things like navigation, timing, GPS,
communication satellites, missile warning satellites.
We've built commercial surveillance satellites.
We've built commercial communication satellites.
We do civil space.
So everything from human exploration
to the robotic exploration of the outer planets.
And keep going on the space front.
But a couple other areas I'd like to put out,
we're heavily engaged in building critical defensive
systems.
And so a couple that I'll mention, the Aegis Combat
System, this is basically the integrated air and missile
defense system for the US and Allied fleets.
And so protects carrier strike groups,
for example, from incoming ballistic missile threats,
aircraft threats, cruise missile threats,
and kind of go down the list.
So the carriers, the fleet itself,
is the thing that is being protected?
The carriers aren't serving as a protection for something else?
Well, that's a little bit of a different application.
We've actually built a version called Aegis Ashore,
which is now deployed in a couple of places around the world.
So that same technology, I mean, basically
can be used to protect either an ocean-going fleet
or a land-based activity.
Another one, the THAAD program.
So THAAD, this is the Terminal High Altitude Area Defense.
This is to protect relatively broad areas
against sophisticated ballistic missile threats.
And so now it's deployed with a lot of US capabilities.
And now we have international customers
that are looking to buy that capability as well.
And so these are systems that not just
defend militaries and military capabilities,
but defend population areas.
We saw maybe the first public use of these back in the first Gulf
War with the Patriot systems.
And these are the kinds of things that Lockheed Martin delivers.
And there's a lot of stuff that goes with it.
So think about the radar systems and the sensing systems
that cue these, the command and control systems that decide
how you pair a weapon against an incoming threat,
and then all the human and machine interfaces
to make sure that they can be operated successfully
in very strenuous environments.
Yeah, there's some incredible engineering
that at every front, like you said.
So maybe if we just take a look at Lockheed history broadly,
maybe even looking at Skunk Works,
what are the biggest, most impressive milestones
of innovation?
So if you look at stealth, I would have called you crazy
if you said that's possible at the time.
And supersonic and hypersonic, so traveling at, first of all,
traveling at the speed of sound is pretty damn fast.
And supersonic and hypersonic, 3, 4, 5 times the speed of sound,
I would also call you crazy
if you said you could do that.
So can you tell me how it's possible to do these kinds
of things, and are there other milestones and innovations
going on that you can talk about?
Yeah, well, let me start on the Skunk Works saga.
And you kind of alluded to it in the beginning.
Skunk Works is as much an idea as a place.
And so it's driven really by Kelly Johnson's 14 principles.
And I'm not going to list all 14 of them off.
But the idea in this, I'm sure, will
resonate with any engineer who's worked
on a highly motivated small team before.
The idea that if you can essentially
have a small team of very capable people who
want to work on really hard problems,
you can do almost anything, especially
if you kind of shield them from bureaucratic influences,
if you create very tight relationships with your customer
so that you have that team and shared vision with the customer.
Those are the kinds of things that
enable the Skunk Works to do these incredible things.
And we listed off a number of them, and you brought up stealth.
I wish I could have seen
Ben Rich rolling a ball bearing across the desk
to a general officer and saying, would you
like to have an aircraft that has the radar cross section
of this ball bearing?
Probably one of the least expensive and most effective
marketing campaigns in the history of the industry.
So just for people not familiar, the way
you detect aircraft, I'm sure there's a lot of ways,
but with radar, for the longest time,
there's a big blob that appears on the radar.
How do you make a plane disappear
so it looks as big as a ball bearing?
What's involved technology-wise there?
What, broadly, is the stuff you can speak about?
I'll stick to what's in Ben Rich's book.
Obviously, the geometry of how radar gets reflected
and the kinds of materials that either reflect or absorb
are a couple of the critical elements there.
And it's a cat and mouse game, right?
I mean, radars get better, stealth capabilities get better.
And so it's a really game of continuous improvement
and innovation there.
I'll leave it at that.
Yeah, so the idea that something is essentially invisible
is quite fascinating.
But the other one is flying fast.
So the speed of sound is 750, 760 miles an hour.
So supersonic is Mach 3, something like that?
Yeah, we talk about the supersonic regime, obviously,
and we kind of talk about that as the realm from Mach 1
up through about Mach 5.
And then hypersonic, so high supersonic speeds
would be past Mach 5.
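For a rough sense of the numbers, a minimal sketch using the commonly cited sea-level speed of sound of about 760 miles per hour; the exact value varies with altitude and temperature, so this is an approximation, not a flight-performance claim.

```python
SPEED_OF_SOUND_MPH = 760  # approximate, sea level; varies with altitude and temperature

def regime(mach: float) -> str:
    """Classify a speed by Mach number, following the rough boundaries
    described above (supersonic: Mach 1 to 5, hypersonic: past Mach 5)."""
    if mach < 1:
        return "subsonic"
    if mach <= 5:
        return "supersonic"
    return "hypersonic"

for m in (1, 3, 5, 6):
    print(f"Mach {m} is roughly {m * SPEED_OF_SOUND_MPH} mph ({regime(m)})")
# Mach 3, roughly the SR-71's cruise regime, works out to about 2,280 mph.
```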
And you've got to remember, Lockheed Martin and actually
other companies have been involved in hypersonic development
since the late 60s.
You think of everything from the X-15 to the space shuttle
as examples of that.
I think the difference now is if you look around the world,
particularly the threat environment that we're in today,
you're starting to see publicly folks
like the Russians and the Chinese saying
they have hypersonic weapons capability that
could threaten US and Allied capabilities.
And also, basically, the claims are
that these could get around defensive systems that
are out there today.
And so there's a real sense of urgency.
You hear it from folks like the Undersecretary of Defense
for Research and Engineering, Dr. Mike Griffin,
and others in the Department of Defense that hypersonics is
something that's really important to the nation
in terms of both parity, but also defensive capabilities.
And so that's something that we're pleased about.
It's something that Lockheed Martin's had a heritage in.
We've invested R&D dollars on our side for many years.
And we have a number of things going on
with various US government customers in that field today
that we're very excited about.
So I would anticipate we'll be hearing more
about that in the future from our customers.
And I actually haven't read much about this.
You probably can't talk about much of it at all.
But on the defensive side, it's a fascinating problem
of perception, of trying to detect things that are really
hard to see.
Can you comment on how hard that problem is?
And how hard is it to stay ahead,
even if we're going back a few decades,
stay ahead of the competition?
Again, you've got to think of these
as ongoing capability developments.
And so think back to the early phase of missile defense.
So this would be in the 80s, the SDI program.
And in that timeframe, we proved, Lockheed Martin proved,
that you could hit a bullet with a bullet, essentially,
which is something that had never been done before,
to take out an incoming ballistic missile.
And so that's led to these incredible hit-to-kill kinds
of capabilities, like PAC-3,
the Patriot Advanced Capability-3,
that Lockheed Martin builds, the THAAD system
that I talked about.
So now, hypersonics, they're different from ballistic systems.
And so we've got to take the next step
in defensive capability.
I'll leave that there, but I can only imagine.
Now, let me just comment.
As an engineer, it's sad to know
that so much of what Lockheed has done, in the past
and today, is classified and shrouded in secrecy.
It has to be, by the nature of the application.
So what we do here at MIT,
we would like to inspire young engineers, young scientists.
And yet, in the Lockheed case, some of that engineering
has to stay quiet.
How do you think about that?
How does that make you feel?
Is there a future where more can be shown?
Or is it just the nature of this world
that it has to remain secret?
It's a good question.
I think the public can see enough, including students
who may be in grade school, high school, or college today,
to understand the kinds of really hard problems
that we work on.
And I mean, look at the F-35, right?
And obviously, a lot of the detailed performance levels
are sensitive and controlled.
But we can talk about what an incredible aircraft this is.
Supersonic, super cruise, kind of a fighter,
stealth capabilities, it's a flying information system
in the sky with data fusion, sensor fusion capabilities
that have never been seen before.
So these are the kinds of things that I believe.
These are the kinds of things that got me excited
when I was a student.
I think these still inspire students today.
And the other thing, I mean, people are inspired by space.
People are inspired by aircraft.
Our employees are also inspired by that sense of mission.
And I'll just give you an example.
I had the privilege to work and lead our GPS programs
for some time.
And that was a case where I actually
worked on a program that touches billions of people every day.
And so when I said I worked on GPS,
everybody knew what I was talking about,
even though they didn't maybe appreciate the technical
challenges that went into that.
But I'll tell you, I got a briefing one time
from a major in the Air Force.
And he said, I go by call sign GIMP.
GPS is my passion.
I love GPS.
And he was involved in the operational test of the system.
He said, I was out in Iraq, and I was on a helicopter,
Blackhawk helicopter.
And I was bringing back a sergeant and a handful of troops
from a deployed location.
And he said, my job is GPS.
So I asked that sergeant.
And he's beat down and half asleep.
And I said, what do you think about GPS?
And he brightened up, his eyes lit up.
And he said, well, GPS, that brings
me and my troops home every day.
I love GPS.
And that's the kind of story where it's like, OK,
I'm really making a difference here in the kind of work I do.
So that mission piece is really important.
The last thing I'll say is, and this
gets to some of these questions around advanced technologies,
they're not just airplanes and spacecraft
anymore.
For people who are excited about advanced software capabilities,
about AI, about bringing in machine learning,
these are the things that we're doing to exponentially
increase the mission capabilities that go on those platforms.
And those are the kinds of things that I think
are more and more visible to the public.
Yeah, I think autonomy, especially in flight,
is super exciting.
Do you see a day, here we go, back into philosophy,
future when most fighter jets will be highly autonomous
to a degree where a human doesn't need to be in the cockpit
in almost all cases?
Well, I mean, that's a world that to a certain extent
we're in today.
Now, these are remotely piloted aircraft, to be sure.
But we have hundreds of thousands of flight hours a year now
in remotely piloted aircraft.
And then if you take the F-35, there are huge layers,
I guess levels, of autonomy built into that aircraft,
so that the pilot is essentially more of a mission manager
rather than doing the second-to-second
elements of flying the aircraft.
So in some ways, it's the easiest aircraft in the world
to fly.
And kind of a funny story on that.
So I don't know if you know how aircraft carrier landings work.
But basically, there's what's called a tail hook,
and it catches wires on the deck of the carrier.
And that's what brings the aircraft to a screeching halt.
And there's typically three of these wires.
So if you miss the first, the second one,
you catch the next one, right?
And we got a little criticism,
I don't know how true this story is,
but we got a little criticism that
the F-35 is so precise, it always catches the second wire.
We're wearing out that wire because it always hits that one.
So that's the kind of autonomy that essentially uplevels
what the human is doing to more of that mission manager.
So much of that landing by the F-35 is autonomous?
Well, it's just that the control systems
are such that you've really dialed out the variability that
comes with all the environmental conditions.
You're wearing it out.
So my point is, to a certain extent, that world is here today.
Do I think that we're going to see a day anytime soon
when there are no humans in the cockpit?
I don't believe that.
But I do think we're going to see much more human machine
teaming, and we're going to see that much more
at the tactical edge.
And we did a demo.
You asked about what the Skunk Works is doing these days.
And so this is something I can talk about.
We did a demo with the Air Force Research Laboratory.
We called it Have Raider.
And so using an F-16 as an autonomous wingman,
and we demonstrated all kinds of maneuvers
and various mission scenarios with the autonomous F-16
being that so-called loyal or trusted wingman.
And so those are the kinds of things
that we've shown what is possible now.
Given that you've upleveled that pilot to be a mission
manager, now they can control multiple other aircraft,
almost as extensions of their own aircraft,
flying alongside them.
So that's another example of how this is really
coming to fruition.
And then I mentioned the landings.
But think about just the implications
for humans and flight safety.
And this goes a little bit back to the discussion
we were having about how do you continuously
improve the level of safety through automation
while working through the complexities
that automation introduces.
So one of the challenges that you have in high performance
fighter aircraft is what's called G-LOC.
So this is G-induced loss of consciousness.
So you pull 9Gs, you're wearing a pressure suit,
that's not enough to keep the blood going to your brain,
you black out.
And of course that's bad if you happen
to be flying low near the deck and in an obstacle
or terrain environment.
And so we developed a system in our aeronautics division
called Auto GCAS, the Automatic Ground Collision Avoidance
System.
And we built that into the F-16.
It's actually saved seven aircraft, eight pilots already,
in the relatively short time it's been deployed.
It was so successful that the Air Force said, hey,
we need to have this in the F-35 right away.
So we've actually done testing of that now on the F-35.
And we've also integrated an Automatic Air Collision
Avoidance System, so that's the air-to-air problem.
So now it's an integrated collision avoidance system.
But these are the kinds of capabilities.
I wouldn't call them AI.
I mean, they're very sophisticated models
of the aircraft's dynamics coupled with the terrain models
to be able to predict when essentially the pilot is
doing something that is going to take the aircraft into,
or the pilot's not doing something in this case.
But it just gives you an example of how autonomy can be really
a lifesaver in today's world.
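As a rough sketch of the kind of logic such a system embodies, predicting the trajectory forward, comparing it against a terrain model, and commanding an automatic recovery only at the last moment, here is a hypothetical illustration; the numbers, names, and the flat terrain model are assumptions for clarity, not the actual Auto GCAS design.

```python
from dataclasses import dataclass

@dataclass
class State:
    altitude_m: float        # height above sea level
    descent_rate_mps: float  # positive means descending

def terrain_elevation_m(time_s: float) -> float:
    """Stand-in terrain model; a real system would query a digital
    elevation database along the predicted ground track."""
    return 250.0

def predicted_altitude(state: State, t: float) -> float:
    """Very simple constant-rate trajectory prediction."""
    return state.altitude_m - state.descent_rate_mps * t

def recovery_needed(state: State, horizon_s: float = 10.0,
                    margin_m: float = 150.0) -> bool:
    """Trigger an automatic pull-up if, anywhere in the prediction horizon,
    the aircraft would come within the margin of the terrain."""
    t = 0.0
    while t <= horizon_s:
        if predicted_altitude(state, t) - terrain_elevation_m(t) < margin_m:
            return True
        t += 0.5
    return False

# Example: a pilot who has blacked out in a 100 m/s descent at 1,200 m.
print(recovery_needed(State(altitude_m=1200.0, descent_rate_mps=100.0)))  # True
```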
It's like automatic emergency braking in cars.
But is there any exploration of perception
of, for example, detecting a G-LOC that the pilot is out,
so as opposed to perceiving the external environment
to infer that the pilot is out, but actually perceiving
the pilot directly?
Yeah, this is one of those cases where
you'd like to not take action if you think the pilot's there.
And it's almost like systems that
try to detect if a driver is falling asleep on the road,
right, with limited success.
So I mean, this is what I call the system of last resort,
right, where if the aircraft has determined
that it's going into the terrain, get it out of there.
And this is not something that we're just
doing in the aircraft world.
And I wanted to highlight, we have a technology we call
Matrix, but this is developed at Sikorsky Innovations.
The whole idea there is what we call optimal piloting,
so not optional piloting or unpiloted, but optimal piloting.
So an FAA-certified system, so you
have a high degree of confidence.
It's generally pretty deterministic,
so we know what it'll do in different situations.
But effectively, be able to fly a mission with two pilots,
one pilot, no pilots.
And you can think of it almost as like a dial
of the level of autonomy that you want,
and it's running in the background at all times,
and able to pick up tasks, whether it's
sort of autopilot kinds of tasks or more sophisticated path
planning kinds of activities, to be
able to do things like, for example, land
on an oil rig in the North Sea in bad weather, zero-zero conditions.
And you can imagine, of course, there's
a lot of military utility to capability like that.
You could have an aircraft that you
want to send out for a crewed mission,
but then at night, if you want to use it to deliver supplies
in an unmanned mode, that could be done as well.
And so there's clear advantages there.
But think about the commercial side.
You've got an aircraft taking people
out to this oil rig.
And if you get out there and you can't land,
then you've got to bring all those people back,
reschedule another flight, pay the overtime for the crew
that you just brought back because they didn't get where
they were going, pay for the overtime for the folks that
are out there on the oil rig.
This is real economics.
These are dollars-and-cents kinds of advantages
that we're bringing in the commercial world as well.
So this is a difficult question from the AI space
that I would love for you to comment on.
So a lot of this autonomy and AI you've mentioned just now
has this empowering effect.
One is the last resort.
It keeps you safe.
The other is the teaming and, in general,
assistive AI.
And I think there's always a race.
So the world is complex.
It's full of bad actors.
So there's often a race to make sure
that we keep this country safe.
But with AI, there is a concern that it's
a slightly different race.
There are a lot of people in the AI space
who are concerned about an AI arms race,
that as opposed to the United States
having the best technology and therefore
keeping us safe, we lose the ability
to keep control of it.
So the AI arms race gets away from all of us humans.
So do you share this worry, this concern,
when we're talking about military applications,
that too much control and decision-making
capability is given to software or AI?
Well, I don't see it happening today.
And in fact, this is something from a policy perspective.
It's obviously a very dynamic space.
But the Department of Defense has put quite a bit of thought
into that.
And maybe before talking about the policy,
I'll just talk about some of the why.
And you alluded to it being sort of a complicated and a little
bit scary world out there.
But there's some big things happening today.
You hear a lot of talk now about a return to great powers
competition, particularly around China and Russia with the US.
But there are some other big players out there as well.
And what we've seen is the deployment of some very,
I'd say, concerning new weapons systems, particularly
with Russia breaching some of the IRBM,
Intermediate-Range Ballistic Missile, treaties, which has
been in the news a lot, and the building of
artificial islands in the South China Sea by the Chinese.
And then arming those islands, the annexation of Crimea
by Russia, the invasion of Ukraine.
So there's some pretty scary things.
And then you add on top of that, the North Korean threat has
certainly not gone away.
There's a lot going on in the Middle East with Iran in particular.
And we see this global terrorism threat has not abated, right?
So there are a lot of reasons to look for technology
to assist with those problems, whether it's AI or other
technologies like hypersonics, which we discussed.
So now, let me give just a couple of hypotheticals.
So people react sort of in the one-second time frame, right?
You know, from a photon hitting your eye to a movement
is on the order of a few tenths of a second
of processing time.
Roughly speaking, computers are operating
in the nanosecond time scale, right?
So just to bring home what that means,
a nanosecond to a second is like a second to 32 years.
So seconds on the battlefield, in that sense,
literally are lifetimes.
And so if you can bring an autonomous or AI-enabled
capability that will enable the human to shrink,
well, maybe you've heard the term, the OODA loop.
So this whole idea that a typical battlefield decision
is characterized by observe,
so information comes in; orient,
what does that mean in context;
decide, what do I do about it;
and then act, take that action.
If you can use these capabilities
to compress that OODA loop and stay inside what
your adversary is doing, that's an incredibly powerful force
on the battlefield.
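The nanosecond-to-second comparison above checks out as straightforward arithmetic; a quick sketch:

```python
# Scaling a nanosecond up to a second is the same ratio as scaling
# a second up to roughly 32 years, as described above.
NANOSECONDS_PER_SECOND = 1_000_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

print(NANOSECONDS_PER_SECOND / SECONDS_PER_YEAR)  # about 31.7 years
```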
That's a really nice way to put it.
That the role of AI and computing in general
has a lot to offer in just decreasing that time scale,
from 32 years to one second in relative terms,
as opposed to making decisions on the scale of seconds
and minutes and hours, which humans are better at.
And it actually goes the other way, too.
So that's on the short time scale.
So humans kind of work in the one-second, two-second
to eight-hour range.
After eight hours, you get tired, you got to go to the bathroom,
whatever the case might be.
So there's this whole range of other things.
Think about surveillance and guarding facilities.
Think about moving material, logistics, sustainment.
A lot of these are what they call dull, dirty, and dangerous
things, where you need to have sustained activity,
but it's beyond the length of time
that a human can practically sustain as well.
So there's this range of things that
are critical in military and defense applications
that AI and autonomy are particularly well suited to.
Now, the interesting question that you brought up
is, OK, how do you make sure that stays within human control?
So that was the context for now the policy.
And so there is a DoD directive called 3000.09,
because that's the way we name stuff in this world.
And I'd say it's well worth reading.
It's only a couple pages long, but it makes some key points.
And it's really around making sure
that there's human agency and control over use
of semi-autonomous and autonomous weapons systems,
making sure that these systems are tested, verified,
and evaluated in realistic, real-world-type scenarios,
making sure that the people are actually
trained on how to use them, making sure
that the systems have human-machine interfaces that
can show what state they're in and what kinds of decisions
they're making, making sure that you
establish doctrine and tactics and techniques
and procedures for the use of these kinds of systems.
And so, and by the way, I mean, none of this is easy,
but I'm just trying to lay kind of the picture of how
the US has said, this is the way we're
going to treat AI and autonomous systems,
that it's not a free-for-all.
And like there are rules of war and rules of engagement
with other kinds of systems, think chemical weapons,
biological weapons, we need to think
about the same sorts of implications.
And this is something that's really important for Lockheed
Martin.
Obviously, we are 100% complying with our customer
and the policies and regulations.
But I mean, AI is an incredible enabler, say,
within the walls of Lockheed Martin
in terms of improving production efficiency,
helping engineers doing generative design,
improving logistics, driving down energy costs.
I mean, there are so many applications.
But we're also very interested in some
of the elements of ethical application
within Lockheed Martin.
So we need to make sure that things like privacy is taken
care of, that we do everything we can to drive out
bias in AI-enabled kinds of systems,
that we make sure that humans are involved in decisions
that we're not just delegating accountability to algorithms.
And so for us, I talked about culture before,
and it comes back to the Lockheed Martin culture
and our core values.
And so it's pretty simple for us to do what's right,
respect others, perform with excellence.
And now, how do we tie that back to the ethical principles
that will govern how AI is used within Lockheed Martin?
And we actually have, so you might not know this,
but there are actually awards for ethics programs.
Lockheed Martin's had a recognized ethics program
for many years, and this is one of the things
that our ethics team is working with our engineering team on.
One of the miracles to me, perhaps as a layman,
again, I was born in the Soviet Union,
so I have echoes, at least in my family history, of World War
II and the Cold War, do you have a sense
of why human civilization has not destroyed itself
through nuclear war, so nuclear deterrence?
And thinking about the future, does this technology
have a role to play here, and what
does the long-term future of nuclear deterrence look like?
Yeah, this is one of those hard, hard questions.
And I should note that Lockheed Martin is both proud
and privileged to play a part in multiple legs
of our nuclear and strategic deterrent systems
like the Trident submarine-launched ballistic missiles.
You talk about, is there still a possibility
that the human race could destroy itself?
I'd say that possibility is real, but interestingly,
in some sense, I think strategic deterrence
has prevented the kinds of incredibly destructive world
wars that we saw in the first half of the 20th century.
Now, things have gotten more complicated since that time
and since the Cold War.
It is more of a multipolar, great powers world today.
Just to give you an example, back then,
there were in the Cold War timeframe
just a handful of nations that had ballistic missile
capability.
By last count, and this is a few years old,
there's over 70 nations today that have that,
similar kinds of numbers in terms of space-based capabilities.
So the world has gotten more complex and more challenging,
and the threats, I think, have proliferated in ways
that we didn't expect.
The nation today is in the middle
of a recapitalization of our strategic deterrent.
I look at that as one of the most important things
that our nation can do.
What is involved in deterrence?
Is it being ready to attack, or is it
the defensive systems that catch attacks?
A little bit of both.
And so it's a complicated, game-theoretical kind of problem.
But ultimately, we are trying to prevent the use
of any of these weapons.
And the theory behind prevention is
that even if an adversary uses a weapon against you,
you have the capability to essentially strike back
and do harm to them that's unacceptable.
And so that will deter them from making use
of these weapons systems.
The deterrence calculus has changed, of course,
with more nations now having these kinds of weapons.
But I think from my perspective, it's
very important to maintain a strategic deterrent.
You have to have systems that you know will work
when they're required to work.
And you know that they have to be
adaptable to a variety of different scenarios
in today's world.
And so that's what this recapitalization is about:
taking systems that were built over previous decades
and making sure that they are appropriate not just for today,
but for the decades to come.
So the other thing I'd really like to note
is strategic deterrence has a very different character today.
We used to think of weapons of mass destruction
in terms of nuclear, chemical, biological.
And today we have a cyber threat.
We've seen examples of the use of cyber weaponry.
And if you think about the possibilities
of using cyber capabilities or an adversary attacking the US
to take out things like critical infrastructure,
electrical grids, water systems, those
are scenarios that are strategic in nature
to the survival of a nation as well.
So that is the kind of world that we live in today.
And part of my hope on this is
that we can also develop technological systems,
perhaps enabled by AI and autonomy,
that will allow us to contain and to fight back
against these kinds of new threats that were not
conceived of when we first developed our strategic deterrent.
Yeah, I know that Lockheed is involved in cyber.
So I saw that you mentioned that.
It's an incredible change.
Nuclear almost seems easier than cyber,
because there's so many ways that cyber can evolve
in such an uncertain future.
But talking about engineering with a mission,
I mean, in this case, you're engineering systems
that basically save the world.
Well, like I said, we're privileged to work
on some very challenging problems
for very critical customers here in the US
and with our allies abroad as well.
Lockheed builds both military and non-military systems.
And perhaps the future of Lockheed
may be more in non-military applications
if you talk about space and beyond.
I say that as a preface to a difficult question.
So President Eisenhower in 1961 in his farewell address
talked about the military industrial complex
and that it shouldn't grow beyond what is needed.
So what are your thoughts on those words,
on the military industrial complex,
on the concern of the growth of these developments
beyond what may be needed?
"What may be needed" is the critical phrase there, of course.
And I think it is worth pointing out, as you noted,
that Lockheed Martin, we're in a number of commercial businesses
from energy to space to commercial aircraft.
And so I wouldn't neglect the importance
of those parts of our business as well.
I think the world is dynamic, and there was a time,
it doesn't seem that long ago to me,
when I was a graduate student here at MIT
and we were talking about the peace
dividend at the end of the Cold War.
If you look at expenditure on military systems
as a fraction of GDP, we're far below peak levels of the past.
And to me, at least, it looks like a time
where you're seeing global threats changing in a way that
would warrant relevant investments in defensive capabilities.
The other thing I'd note is that for military and defensive systems,
it's not quite a free market, right?
We don't sell to people on the street.
And that warrants a very close partnership
between, I'd say, the customers and the people
that design, build, and maintain these systems.
Because of the very unique nature,
the very difficult requirements, the very great importance
on safety and on operating the way they're intended every time.
And so that does create, and it's frankly
one of Lockheed Martin's great strengths,
this expertise that we have built up
over many years in partnership with our customers
to be able to design and build these systems that
meet these very unique mission needs.
Yeah, because building those systems is very costly,
there's very little room for mistakes.
I mean, Ben Rich's book and so on just tells
the story.
If you're an engineer, it reads like a thriller.
OK, let's go back to space for a second, I guess.
I'm always happy to go back to space.
So a few quick, maybe out there, maybe fun questions,
maybe a little provocative.
What are your thoughts on the efforts of the new folks,
SpaceX and Elon Musk?
What are your thoughts about what Elon is doing?
Do you see him as competition?
Do you enjoy competition?
What are your thoughts?
First of all, certainly Elon, I would say SpaceX
and some of his other ventures are definitely
a competitive force in the space industry.
And do we like competition?
Yeah, we do.
And we think we're very strong competitors.
I think competition is what the US was founded on
in a lot of ways, always coming up with a better way.
And I think it's really important to continue
to have fresh eyes coming in, new innovation.
I do think it's important to have level playing fields.
And so you want to make sure that you're not
giving different requirements to different players.
But I tell people, I spent a lot of time at places like MIT.
I'm going to be at the MIT Beaver Works Summer Institute
over the weekend here.
And I tell people, this is the most
exciting time to be in the space business in my entire life.
And it is this explosion of new capabilities
that have been driven by things like the massive increase
in computing power, things like the massive increase
in comms capabilities, advanced and additive manufacturing
are really bringing down the barriers to entry
in this field.
And it's driving just incredible innovation.
It's happening at startups, but it's also happening
at Lockheed Martin.
You may not realize this, but Lockheed Martin, working
with Stanford, actually built the first CubeSat that
was launched here out of the US, called QuakeSat.
And we did that with Stellar Solutions.
This was right around, just after, 2000, I guess.
And so we've been in that from the very beginning.
And I talked about some of these, like Maya and Orion,
but we're also in the middle of what we call SmartSats,
software-defined satellites that can essentially
restructure and remap their purpose, their mission,
on orbit to give you almost unlimited flexibility
for these satellites over their lifetimes.
So those are just a couple of examples.
But yeah, this is a great time to be in space.
Absolutely.
So Wright Brothers flew for the first time 116 years ago.
So now we have supersonic stealth planes
and all the technology we've talked about.
What innovations, obviously you can't predict the future,
but what do you see for Lockheed in the next 100 years?
If you take that same leap, how will the world
of technology and engineering change?
I know it's an impossible question,
but nobody could have predicted that we could even
fly 120 years ago.
So what do you think is the edge of possibility
that we're going to be exploring in the next 100 years?
I don't know that there is an edge.
We've been around for almost that entire time, right?
The Lockheed Brothers and Glenn L. Martin
starting their companies in the basement of a church
and an old service station.
We're very different companies today
than we were back then, right?
And that's because we've continuously
reinvented ourselves over all of those decades.
I think it's fair to say, I know this for sure,
the world of the future, it's going to move faster,
it's going to be more connected,
it's going to be more autonomous,
and it's going to be more complex than it is today.
And so this is the world as a CTO of Lockheed Martin
that I think about, what are the technologies
that we have to invest in?
Whether it's things like AI and autonomy,
you can think about quantum computing,
which is an area that we've invested in
to try to stay ahead of these technological changes
and frankly, some of the threats that are out there.
And I believe that we're going to be out there
in the solar system, that we're going to be defending
and defending well against probably military threats
that nobody has even thought about today.
We're going to use these capabilities
to have far greater knowledge of our own planet,
from the depths of the oceans
all the way to the upper reaches of the atmosphere
and everything out to the sun
and to the edge of the solar system.
So that's what I look forward to.
And I'm excited, I mean, just looking ahead
in the next decade or so to the steps
that I see ahead of us in that time.
I don't think there's a better place to end.
Okay, thank you so much.
Lex, it's been a real pleasure, and sorry
it took so long to get up here,
but glad we were able to make it happen.