0:03
SNI Digital, Innovations in Learning, in association with the UCLA
0:13
Department of Neurosurgery, Linda Liau,
0:17
the chairwoman, and its faculty are pleased to bring to you the UCLA Department of Neurosurgery 101 lecture series on neurosurgery and clinical and basic neuroscience.
0:34
This series of lectures is provided free to bring advances in clinical and basic neuroscience to physicians and patients everywhere.
0:45
One out of every five people in the world suffers from a neurologically related disease.
0:51
This lecture and discussion will be on the topic of artificial intelligence and neurosurgery.
0:59
It will be given by Richard Byrne, the Roger C. Bone Presidential Professor and Chairman, Department of Neurosurgery, Rush University Medical Center in Chicago, Illinois.
1:21
I realize some of you have to get to the operating room but this one is just pure fun for me and I hope it'll be fun for you. I'm trying to inspire residents given the resources that you have here at
1:32
UCLA, to be involved in artificial intelligence research and in this next wave that's coming. These are my disclosures.
1:44
The first one I'll discuss a little bit; the other ones I will not. So this is my outline. I want to explain my interest. Why am I interested in artificial intelligence? Why has this become an
1:53
obsession for me over the last seven years? We'll go over some history of artificial intelligence, how it works. We'll assess some of the literature and then where it might be used in the future.
2:03
So those of you who know me clinically know that this is what I really do for a living and this sort of stuff.
2:10
But also, before I became a neurosurgeon, I was fascinated with math. I was always a math person, and I'm drawn to
2:20
natural patterns in math, math that follows what nature does, and so on. It turns out that artificial intelligence is really good at finding some of these patterns, particularly in the
2:31
brain. And in reality, artificial intelligence is math. That's really what it is: particularly linear algebra, matrix theory, Bayesian inference, etc. I'm also a chess person. I've been a
2:47
daily chess player since I was six years old. This is my first chess computer, from when I was eight. This is one I won in high school. I watched as Deep Blue played Garry Kasparov. I played all of those
2:59
games out 30 something years ago with great interest. In fact, I probably spent way too much time playing chess. And thankfully, I pulled back from it because this is a great quote by Paul Morphy,
3:11
one of the great chess players: "The ability to play chess is the sign of a gentleman; the ability to play chess well is the sign of a wasted life." I think that's very true. And I'm a daily chess
3:21
player. I play one-minute games every morning at 5:30 at my desk just to wake up my mind. At the same time, I was utterly fascinated with what artificial intelligence did to the
3:31
chess world in one day of training. One day of training, and AlphaZero became the best chess program in the world. How could you not be intrigued by that? I did some formal studying with MIT and all
3:47
their online courses. But I'll say that the best resources for people with limited time, like myself and like all of you, are online. MIT has 7,000 lectures online, 7,000. I honestly don't know why anyone
3:59
goes to college anymore. All of it's there. And it's the best lectures you can possibly have. I've gone through all of their AI stuff over the last six years. Google DeepMind is fantastic.
4:12
Veritasium, amazing 14 million subscribers.
4:17
My favorite is 3Blue1Brown. This guy Grant Sanderson is unbelievable at explaining artificial intelligence and all the math behind it. I'll be using some of his material here today, but
4:28
you don't have to go back and get a PhD in this stuff. It's all available. I'm not the only one interested in artificial intelligence in neurosurgery. You see this exponential growth curve in
4:39
publications on AI in neurosurgery. And by the way, I'll be using machine learning and AI interchangeably; I'll explain that a little bit later. Neurosurgeons are not the only ones interested. When
4:49
you take a look at these quotes by the biggest people in computing, I mean, software is eating the world, but AI is going to eat software. I mean, you see it compared to fire and electricity.
5:01
Bill Gates compares it to the computer revolution and the internet. And I think for good reason. But for the residents who don't understand this, I mean, I started medical school before the
5:14
computer revolution.
5:17
Everything that you see here in yellow was only possible because of computers in medicine.
5:24
All of it. And if we're about to go through another revolution that all of these people are predicting, I would strongly encourage you to get as good a background in it as possible. Now, just for
5:37
one example, this is computer technology from 40 years ago. Dave Roberts came up with this idea of frameless stereotaxy. Most of you don't know the story, or Dave Roberts, a wonderful guy,
5:48
former chairman of the board and chairman at Dartmouth, who wrote a patent in the late 80s, and that became image guidance. Rich Bucholz, and most of you don't know Rich either, but he's a
5:60
wonderful guy, former chairman at St. Louis University, who wrote these patents that became the StealthStation. And I asked Rich, well, how did you come up with all this stuff? And he said, it's
6:10
just linear algebra. It's the same stuff you might have learned in the classes after calculus. This is early frameless stereotaxy. Some of you might remember the ISG wand. This is the early stealth
6:22
station. It's all evolved way past that. So this is what computers did to neurosurgery. Imagine what AI is going to do to neurosurgery. So I'm gonna take you back to the origin of both.
6:34
computers and AI, because a lot of the origins are from the same
6:40
people. So how many people know who this guy is? Anybody? Anyone ever hear of Claude Shannon? Claude Shannon, by far the most important person you've never heard of, by far. In fact, perhaps
6:54
the most important person of the 20th century. He came up with the concept of the Bit, binary digit. He didn't call it Bit, it was later named that. But that was his master's thesis at MIT. And
7:06
he won, not a Nobel Prize, but the Alfred Noble Prize, which is an award for a junior investigator, in 1937 for this. He came up with the idea of just turning everything into binary ones and
7:18
zeros and so on. So that's one of the most important papers ever. But probably the most important paper ever is this one, "A Mathematical Theory of Communication," which has 152,000 citations.
7:30
It literally started the information age. And yet nobody knows who Claude Shannon is. It's amazing. But anyone in computer science knows who Claude Shannon is.
7:42
Not too surprisingly, he was an excellent chess player. Here he is playing Mikhail Botvinnik to a draw. Mikhail Botvinnik was the world champion who was the coach of Garry Kasparov.
7:55
So in 1950, he also created the first artificial chess program. Here it is. That's the first chess computer, and he's playing with Ed Lasker, a grandmaster, right there. That idea later led to Deep
8:10
Blue, which took Garry Kasparov down in 1997.
8:15
That's a history worth knowing in and of itself. Later iterations of chess computers, just conventional chess engines with human
8:25
feedback and input, led to a thing called Stockfish, which was the strongest chess program in the world for a decade, until it met this guy. This is Demis Hassabis, who is the
8:40
CEO of DeepMind. He was a junior chess master, rated 2300, and he gave up chess to do a cognitive neuroscience PhD at Cambridge, and he wrote these important papers describing AI algorithms that
8:59
would play chess and then the board game Go. So, when his program, with nine hours of AI training, played Stockfish, it won 28 to zero. It wasn't even close. Nine hours of training, nine hours. It
9:17
played itself 400 million times and became the strongest program in the world. When you see this and you realize the power of it, you can't look away. You realize this is going to be very important
9:27
for everything. This was 2017, and when I played all these games out, it was just unbelievable. Before you feel too bad for Claude Shannon and his computer, Claude Shannon also created the world's first
9:40
working AI program. This was his mobile mouse, called Theseus, that could find its way through a maze; then you'd change the gates and it would find its way again, all with an AI program. So he won
9:55
either way. Now, why am I talking about chess so much, besides the fact that I love chess? Chess is the perfect AI target. Keep this in mind: when you think about whether your project is appropriate for AI
10:07
analysis, you have to think about chess. Chess has perfect inputs: there are 32 pieces on 64 squares,
10:16
perfectly defined, and there's a perfectly defined output, and that is checkmate. So when you're thinking about your project, is it appropriate for AI? Think about that. Perfect input, perfect
10:27
output. It doesn't matter how complex the middle portion is. Chess has 1 times 10 to the 120th potential games, a 1 with 120 zeros. It's essentially infinitely complex, but AI can handle that with a big
10:41
enough neural network, where they actually use a Monte Carlo tree search algorithm for this,
10:47
but it has perfect inputs and perfect outputs. So after Garry Kasparov lost to Deep Blue, he eventually came around and understood that AI was a very, very important part of the future, and he
10:59
encourages everyone to partner with AI.
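To make the idea of perfect inputs and perfect outputs concrete, here is a minimal sketch, not from the lecture, of how a chess position can be encoded as a fixed-length vector for a neural network, with the game result as an unambiguous label. The FEN string and the helper function are illustrative assumptions.

```python
import numpy as np

# One plane per piece type and color: 6 white + 6 black = 12 planes.
PIECES = "PNBRQKpnbrqk"

def encode_board(fen: str) -> np.ndarray:
    """Turn the board part of a FEN string into an 8 x 8 x 12 one-hot tensor.

    This is the "perfect input": every legal position maps to exactly one
    fixed-length vector, with nothing missing and nothing ambiguous.
    """
    planes = np.zeros((8, 8, 12), dtype=np.float32)
    board_field = fen.split()[0]          # e.g. "rnbqkbnr/pppppppp/8/..."
    for rank, row in enumerate(board_field.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():              # a digit means that many empty squares
                file += int(ch)
            else:
                planes[rank, file, PIECES.index(ch)] = 1.0
                file += 1
    return planes.reshape(-1)             # flatten to a 768-element vector

# The "perfect output": the game result is unambiguous.
# +1 = white wins (checkmate), -1 = black wins, 0 = draw.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
x = encode_board(start)
print(x.shape)   # (768,) -> a clean, fixed-size input for a neural network
```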
11:04
I finally got to meet him at the Congress meeting. He asked me my rating; I said 2030. He was not impressed, but he was nice enough to take a picture with me. So when you look at early AI
11:14
development in healthcare, this is a really remarkable publication, just looking at tens of thousands of retinal fundus photographs, just looking at the pictures without really even knowing
11:27
what it's looking at. This AI program, starting from bottom, going to the top, could determine a patient's blood pressure, smoking status, hemoglobin A1C, BMI, age, and for the first time in
11:41
history, could determine gender. In fact, there are 100,000 ophthalmologists actively working, and they didn't know that you could tell someone's gender by looking at their retina, with 97% accuracy. So
11:53
it finds all sorts of hidden patterns, and that's very intriguing for neurosurgery. These are some of the crazier cases that I've taken care of. And I really wanted to know whether,
12:04
if I created a large knowledge model, large language models like ChatGPT could work through some of these problems to answer questions like vascularity, firmness,
12:15
where are the cranial nerves, et cetera, if you put together a large enough database. I actually met with the DeepMind team in 2018. They flew out from London. We got to contract negotiations,
12:26
and my institution shut it down because they didn't want to share data. A year later, Mayo Clinic signed a 10-year contract with DeepMind.
12:37
Anyway, I'm going to show you some easy early wins in neurosurgery for artificial intelligence. This is a case I did about four months ago. It's a standard clinoidal meningioma. It looks pretty
12:51
straightforward; the patient is losing some vision. There's an old saying in radiology: the most commonly missed lesion on imaging is the second lesion. So there's the obvious lesion there, and this is what we
13:03
discovered during surgery.
13:07
Something didn't look right. My chief resident was working this tumor down, and something looked wrong on the carotid. You can see the carotid there, in proximity to the optic nerve.
13:31
There was an aneurysm
13:34
buried in the tumor. How do I get rid of that echo?
13:40
What is it?
13:44
I got it. So there was an aneurysm buried in the tumor. In fact, there were two aneurysms buried in the tumor. We didn't know it. Two radiologists missed it. I missed it. Thankfully, we found
13:56
it right before we bit into it. We later fixed all of that with stent and coil. But these are things that you don't want to have to
14:06
rely on somebody with 25 years of experience who intuitively looked at it and said something doesn't look right here. An artificial intelligence program does not miss the second lesion. And you can
14:17
see there on the coronal, you can see a little blip off the carotid. It just looks like a bend. But an AI program probably would not miss that. So I think that's an easy win for an artificial
14:27
intelligence program. This is a case I did last Monday, a petroclival meningioma around the basilar artery. You know it's going to be a complicated, difficult case. It's heavily calcified and vascular and
14:40
so on. But the question is, do
14:43
you go with a transpetrosal approach or an anterior petrosectomy with a transsylvian view? Your decision really comes down to where the cranial nerves are. He's got a sixth-nerve palsy. You know the sixth
14:55
nerve is going to be involved; it'll be buried in it. The real question is going to be, where are three and five? If three and five are up, then you come from behind. If they're down, you
15:04
come from above. In cases like this, you simply have to guess. If you enter enough of these cases into an AI program, you should be able to find patterns. Unfortunately, I made the wrong guess.
15:16
This is my retrosigmoid view from last Monday. You can see the fifth cranial nerve right up here, and 7, 8, 5, 11, and all that tumor. The window that I wanted to get the tumor out through was shut down by the
15:31
fifth nerve. It took a long time going between those windows, and thankfully things worked out. These are the sorts of patterns an AI program may pick up on.
15:43
Here's another case from Wednesday. This is a fortunate case. The operation went well. AI won't help me with the operation, but it will help me figure out how to reconstruct this person's forehead
15:57
once that juvenile angiofibroma is out.
16:04
When it comes to pituitary adenomas, this is a series of 46 giant adenomas that we just published last month. I'm always wondering, where's the optic chiasm? Is it a prefixed chiasm or a postfixed chiasm?
16:16
That tells you whether this is a transsphenoidal case, a transcranial case, a combined case, et cetera. We came up with some indices based on craniometrics and so on that we published a few
16:30
years ago. I would love for an AI program to have a look at a series of cases and just decide whether or not that's true
16:39
So I'm thinking about it from an intraoperative and preoperative decision making.
16:43
standpoint, but there's obviously so much more that we can do. Getting back to the history, I mentioned Claude Shannon, but you should also know Alan Turing, John von Neumann. You've probably
16:51
heard these names before, giants in the beginning of computing. But this person you've probably never heard of is Frank Rosenblatt. He was a young PhD at Cornell. In the mid-50s, he created a
17:04
program called the Perceptron. This was the first AI program created that was not game-related. And he essentially used the eye as a model: he looked at the retina, interneurons,
17:17
and then an output layer. And he was able to train it to say what's a circle, a square, or a triangle. It was very limited, but it was the first effort. You would have heard more of Rosenblatt, but
17:29
he died in a boating accident shortly thereafter in Chesapeake Bay. So, by definition, AI is the biggest category; it's just a generic term for machine cognition. Machine learning is a subset of AI. If
17:44
you look at the diagram on the right there, you get an input, you extract features, there's a single layer between input and output, and that's called the hidden layer, and then it determines is
17:56
that a car or not a car. Deep learning is a little more complicated: there are more layers, and feature extraction and classification happen inside the network.
18:07
So when you compare artificial intelligence and natural intelligence, size seems to matter. If you look at a variety of species, there does seem to be an ascending line of intelligence and size,
18:20
there are, of course, notable exceptions, things like crows, ravens, et cetera, that are very, very smart but small. When you look at hominid species, and you compare Australopithecus with a
18:31
350 cc brain versus modern Homo sapiens at between 1,100 and 1,400 cc, there does seem to be a clear ascent in intelligence. That's despite the fact that having a big brain is very expensive; having 20% of the
18:46
cardiac output go to a three pound organ is a very, very expensive model. And there's of course all the dangers of childbirth. But the human brain memory capacity is remarkable. It's actually 25
18:58
petabytes. That's 300 years of TV watching. You just imagine that much capacity in your brain. And then we take a look at artificial intelligence and what's happened over time is the computers have
19:10
shrunk, but the connections have grown dramatically, actually exponentially, doubling in size initially every 18 months and now every two years; that's called Moore's Law. And there's a competition
19:25
every year put on by Fei-Fei Li, who's an
19:31
artificial intelligence expert at Stanford. It's called ImageNet. What she would do is encourage anybody developing artificial intelligence programs to enter this competition, whereby
19:43
they would be given unknown objects, and then they would determine how accurate the program was. The best they could do before 2013 was about a 30% error rate, and then there was a sudden drop by
19:56
half in a year, and that all came from larger neural networks being trained on much, much more data. It's only gone down from there, and it's
20:10
dramatically more successful now. So that was the first really clear evidence that huge neural networks and tons of data going in were the key. Now, it's important to understand that data is going up
20:26
on an almost exponential curve. These are zettabytes, whatever that is; I think it's like a billion gigabytes of data being created, and it's going up dramatically. There's so much more data to be
20:39
trained on. There's so much more money going into artificial intelligence, hundreds of companies going into this. And you take a look at healthcare data: these are in exabytes, with a 48% annual
20:52
increase in healthcare data. So you can imagine there's going to be so much data to train on. Now, when do you use AI versus standard statistics? This is an excellent paper from September in JAMA.
21:04
Bottom line is
21:07
with artificial intelligence, as I pointed out with chess, the best model has really clear vectors going in, really clear
21:18
factors going in. It doesn't matter how big the data set is, it can handle the data set, that's okay. And then you have to have a very, very clear output. So it can't be a ratio of something
21:30
that moves; it has to be a very clear output. The advantage is it can handle monster data sets. The drawback is it's an opaque process. It's hard to reconstruct. It doesn't come out with P values. It's
21:44
hard to retrofit the data because what happens in the middle hidden layers is really difficult to deconstruct. So it's very difficult to audit. So before you consider using an AI model on your
21:57
project, you have to think about these things. Can you internally validate it? Can you externally validate it? And how clear is your indication? So going back to the perceptron.
22:11
When you think about the original model of an AI
22:16
program, take a look at this: here's a circle in 784 pixels. What you do is take the 784 pixels as a vector on the left, and then you put them through these hidden layers. Each
22:29
node in each layer gets a weight and a bias, the weight being more likely to conduct or less likely to conduct. The bias is where you set whether it's going to fire or not; you can set it at 0.5 or you can set it
22:42
at 0.05, whatever you want. How you set it initially doesn't really matter, because you're going to fix it on the way through. And then it'll decide it's either a square, a circle, or a triangle. Those
22:53
which are correct are reinforced. Those which are incorrect are then fixed on the way back, through backpropagation. And so it's essentially just a loss reduction function. This is just a function
23:06
generator is what it really is. And then you bring in another circle that looks a little different or in a different spot, etc. And then you train it to figure out what is a circle. There's a
23:15
little math that goes on. There's a thing called a ReLU, a rectified linear unit, that gets rid of negative values to keep it from shutting down nodes, and then a softmax to bring anything that's
23:27
higher or lower into the range between zero and one, so you can use the outputs as probabilities. So when you think about a 784-element vector input with two hidden layers and all of these parameters, these weights and biases,
23:41
you've got 13,000 knobs in a relatively small net here, so there's a lot of power in this. Compare that to GPT-3, which has 175 billion parameters and was trained on 300 billion tokens. A token is the
23:55
smallest unit of language. And then compare that to brain scale, with 100 billion neurons and 100 trillion synapses. And the bigger you get, the more generalizable the intelligence.
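As a rough sketch of the toy network just described, here is a tiny NumPy version of a 784-pixel input passing through two hidden layers of weights and biases, a ReLU, and a softmax. The layer sizes and random numbers are assumptions for illustration; this only shows the forward pass and the loss that backpropagation would then push down.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Rectified linear unit: zero out negative values.
    return np.maximum(0.0, z)

def softmax(z):
    # Squash scores into values between 0 and 1 that sum to 1 (probabilities).
    e = np.exp(z - z.max())
    return e / e.sum()

# A 784-element input (e.g. a 28 x 28 image of a circle, square, or triangle).
x = rng.random(784)

# Two hidden layers of 16 nodes and a 3-class output. Every entry of the W and b
# arrays is one of the "knobs"; here that comes to roughly 13,000 weights and biases.
W1, b1 = rng.normal(0, 0.1, (16, 784)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 16)),  np.zeros(16)
W3, b3 = rng.normal(0, 0.1, (3, 16)),   np.zeros(3)

h1 = relu(W1 @ x + b1)
h2 = relu(W2 @ h1 + b2)
probs = softmax(W3 @ h2 + b3)      # e.g. [P(square), P(circle), P(triangle)]

# The loss that backpropagation would reduce (pretend the true class is "circle").
true_class = 1
loss = -np.log(probs[true_class])
print(probs, loss)
```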
24:09
So, what's going on in the neurosurgical literature? Well, this is an excellent publication, I think, as a starting point, from the Miami group. Looking at these graphs here, just take a
24:19
look on the right. Most of the peer-reviewed publications on artificial intelligence in neurosurgery are coming out of the US, the West in general, and then the Far East. As for institutions doing this
24:29
publishing, well, UCLA made the list. The California system made the list: UCSF, Mayo Clinic, Harvard, and Northwestern. So there is a cluster of groups forming that are doing a lot of
24:42
this work. And what are they talking about? Initially, they were talking about machine learning and artificial intelligence in general, and then it breaks out into subtopics in neurosurgery. And the success
24:52
will be measured by how well their data fit the chess model. This is just from last month: all of the neurosurgery and artificial intelligence literature for the last month, looking at TBI,
25:07
looking at cervical spine fractures, a study of 118,000
25:12
patients with cerebral aneurysms looking at rupture risk, and another big review in spine surgery.
25:20
So how do you know if it's good or bad? You can deconstruct
25:27
standard statistics; it's very difficult to deconstruct artificial intelligence models. So this is a very good paper from the Harvard group pointing out how you evaluate an
25:40
AI paper. Virtually everything here is very, very similar to standard statistics, with the exception of the model development and model performance. In the model development, what you really have to
25:56
think about is, can you do internal validation? Can you check before the model is done? Typically, you'll put 80% of your data into the model development and then keep 20% on the side
26:11
for testing the model. So can you do internal validation, and how interpretable is this going to be? In other words, if it spits out a number or a finding, how are you going to figure out
26:25
what it really means? And then is it externally validatable? In other words, are you going to be able to save some data besides the test set to make sure that it makes sense? Or can you validate
26:36
it against expert opinion?
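Here is a minimal sketch of the internal-validation idea described above, an 80/20 split with the 20% held out for testing, using scikit-learn. The synthetic data and the logistic-regression model are placeholders, not anything from the papers being discussed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data: 1,000 patients, 20 features, a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Internal validation: 80% for model development, 20% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# External validation would mean repeating this check on data the model has
# never seen at all (another institution, another time period), or comparing
# its output against expert opinion, as described above.
```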
26:41
And how well does it comply with standards for multivariable prediction models? When you're doing a multivariable analysis, this is TRIPOD compliance. So how does it do against this? This is a
26:55
review of what's come out in the last three years in neurosurgery and machine learning. And the answer is there's about 80% compliance with TRIPOD. The biggest place where it falls apart is that only
27:07
about 15% of these papers have any sort of external validation at all. Without external validation, you don't really know what to do with it. I put this paper here in part because I know the story
27:18
of it very well, because the lead author here is one of my neighbors and works in my wife's practice. What they did here is they looked at tens of thousands of chest radiographs to figure out, well,
27:30
what's there to know? What can you do? So they literally just looked at chest radiographs and found out that they could determine whether or not someone's diabetic or had an elevated hemoglobin A1C,
27:41
looking at the chest x-ray. Well, it's more than just looking at systemic and external fat and so on. They couldn't figure out how the model could figure it out until they started to do sort of a
27:54
back analysis of it. Then they figured out that it was working all of this out by looking at visceral fat in the thorax. If you look here on the right side, at the true
28:07
positives in the bottom right, that's all visceral fat. Then you look at the true negatives in the upper left: very little visceral fat. It's the distribution and the amount that determined
28:17
it. They couldn't know that. They had to retrofit everything in order to figure out how it was figuring this out. And this was published in Nature Communications.
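The back analysis described here is, in spirit, what attribution methods do: ask which parts of the input most influence the model's output. Below is a generic gradient-saliency sketch in PyTorch as one example of that kind of retrofitting; the model and the image are placeholders, and this is not necessarily the specific method used in that paper.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a trained chest-x-ray classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
model.eval()

# Placeholder image standing in for a chest radiograph (1 channel, 224 x 224).
image = torch.rand(1, 1, 224, 224, requires_grad=True)

# Gradient saliency: how much does each pixel influence the "diabetic" score?
score = model(image)[0, 1]
score.backward()
saliency = image.grad.abs().squeeze()    # high values = influential pixels

# In the study described above, this kind of map reportedly lit up over
# visceral fat; here it just highlights whatever the untrained model latched onto.
print(saliency.shape)                     # torch.Size([224, 224])
```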
28:31
That seemed to get this to move.
28:35
It really seems to like that slide. So is your project appropriate for AI evaluation? So I get back to the question, is it like chess? Do you have a perfect vector set going in, a very
28:49
well-defined space, and a clear output? Can it be reduced to linear algebra? If you can't reduce it to linear algebra, you're probably not going to be able to use AI. So, vectors going through
28:59
matrices, ideal for huge digital data sets where you might find some hidden patterns, ideally for novel data sets without precedent, and then, again, you have to figure out a way to externally
29:12
validate it. So here's a good example of this. This came out about six months ago in JAMA Neurology, taking a look at an AI program reading clinical EEGs So this was trained on thousands of EEGs,
29:27
and then externally validated against a panel of expert neurologists looking at the same studies. And again, very, very good inputs.
29:37
A 10-20 array is a 10-20 array. It's going to be the same channels in the same places. And the output is: is it a seizure or not? So good inputs, good outputs. It doesn't matter how complex the
29:48
data is. And of course, EEG is very complex. Well, it turns out that this program is equivalent to a panel of epileptology experts
29:59
So a little bit of math is now equivalent to epileptologists, the best epileptologists. I brought this to the attention of the head of our epilepsy group; there are seven epileptologists at
30:11
Rush. And I said, have you read this paper? He said, yeah. I said, what do you think? He said, I'm glad I'm about to retire.
30:21
So, here's some other easy wins. You all hate putting together discharge summaries. The people who read them hate reading them. Can it be automated? And the answer is yes. Yes, it can be
30:33
automated with very good accuracy. That's an easy win. That's easy in, easy out.
30:41
Coding: it's relatively complex for us to memorize all these numbers, and AI handles them very easily. That's another easy win.
30:51
How about novel data? This is the ICVP trial that I'm a part of. It's a clinical trial of wireless implants for artificial vision for blind patients. This is what the implant looks like, with 17
31:06
electrodes in each one of these tiny chips. This is the size of the electrode. This is the first volunteer patient that I implanted, in February of 2022, and that's the array that I implanted. You
31:24
can see it there. Here's the patient now. This is artificial vision. You see all of these phosphenes he gets in his visual field. With a massive amount of data, two years' worth of
31:35
data, we have to train the external camera to the electrodes here to fit his visual field. The only external validation we have is: can he find that coffee cup or not? Can he find the person in the
31:50
room? And actually he can, and we use artificial intelligence programs for that.
31:58
So when you think about neurosurgical AI, it looks like a lot of blue ocean, and we're making a lot of waves in it, and I think it's an exciting time. But we have to remind ourselves that
32:10
there are a lot of bigger boats in the ocean that are going to create their own waves. CMS, insurance companies, healthcare systems, big law, etc. are also very interested in artificial
32:22
intelligence. And you know, you're only as good as your input and your output. So from your input standpoint, this is a great example of this. This is MIT's Norman psychopathic AI. You can get
32:34
on the website; it's just fascinating and horrifying. It was created in 2018. This was an artificial intelligence program trained entirely on a subreddit of people who are obsessed with violent
32:46
death. So that's all this program has ever seen. When you show it Rorschach pictures like this, standard AI programs see umbrellas, and Norman sees nothing but violent death. In fact, that's all
33:00
Norman can see. So you're only as good as your inputs. From the output standpoint, if all you care about is money, that's all you're going to get. So it turns out that online advertisers decided
33:13
to take a shortcut: rather than having to vet the websites, they just looked at where the keywords they want to advertise against appear. So not too surprisingly, some internet
33:28
pirates set up their own websites with nothing but massive amounts of keywords. And last year, 20% of online ads went to these AI-generated, essentially pirate, sites. So if all your programming
33:41
is about money, well, that's all it's going to see. That brings us to the next point. So Cigna, Aetna, Blue Cross, et cetera: massive healthcare databases. You'd think that they'd be working on curing
33:55
cancer or stroke, et cetera. But in reality, this is what they've been up to. So Cigna was caught using artificial intelligence programs to deny access to care, essentially, to deny claims and
34:07
deny precertifications for MRI scans, et cetera. They figured out that Cigna was doing this because of how quickly the claim denials were coming out.
34:19
So they knew that the denials were not actually going through physicians, as they were claimed to have been.
34:26
So really, what's their output here? You can tell what their output is. Healthcare systems are going to be using AI, and you can probably predict what they're going to be using AI programs for.
34:39
And they'll have a good excuse for it. Now, this is from JAMA Internal Medicine last year. This is a comparison of physician versus AI chatbot responses to online questions. And this was as
34:48
judged by a panel of physicians. The panel of physicians
34:57
found that the chatbot was
35:00
three times more likely to be considered informative and ten times more likely to be considered empathetic So there will be lots of excuses for replacing physicians in the future. So this is really
35:15
how AI has been used so far in healthcare.
35:19
And the public really does not understand any of this. They're remarkably trusting: in a recent survey, two-thirds of patients said that they would use ChatGPT for their healthcare decisions. So
35:30
what are they hearing from ChatGPT? By the way, ChatGPT just stands for Chat Generative Pre-trained Transformer. So when you ask ChatGPT about a brain tumor, what does it tell you? Well, it's going to
35:41
give you some standard stuff,
35:44
though not as much as you would like, and it's also going to give you a lot of alternative therapies for brain tumors. And it'll take you very, very deep into
35:54
hyperthermia, mind-body practices, et cetera. So that's what the public is seeing. Are you guys using the iQueue platform here for your surgical scheduling? Anyone know? I think it's
36:10
13 of the top 20 institutions in the U.S. News & World Report rankings that are using this platform. I saw this in 2020 when it was brought to our institution, and the moment they described it, I knew we were in trouble.
36:23
This is a sorting and stacking algorithm that maximizes the minutes used in an operating room. So what does it hate? It hates cancellations, it hates emergencies, it hates double scrubs, anything,
36:36
anything complicated. It loves quick, stackable elective surgery and hates everything else. So what does it really hate? It hates neurosurgery. And ever since they implemented this, we've been
36:50
losing our time. And the hip and knee people keep doing more and more, because a hip is an hour plus or minus five minutes, and your skull base meningioma can be plus or minus multiple hours. So that's been a disadvantage for us.
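As a toy illustration of why a pure utilization-maximizing scheduler structurally disfavors long, unpredictable cases, here is a hypothetical greedy block-packing sketch. The case names, durations, and scoring rule are made up and have nothing to do with the actual commercial product.

```python
# Toy illustration: pack cases into an 8-hour (480-minute) block, greedily
# preferring cases whose duration is short and predictable. Numbers are made up.
cases = [
    {"name": "total hip",             "mean": 60,  "variability": 5},
    {"name": "total knee",            "mean": 75,  "variability": 10},
    {"name": "lumbar microdisc",      "mean": 90,  "variability": 20},
    {"name": "skull base meningioma", "mean": 360, "variability": 120},
]

BLOCK_MINUTES = 480

def score(case):
    # Reward short, low-variance cases; long or unpredictable cases score poorly.
    return case["mean"] + 2 * case["variability"]

schedule, used = [], 0
for case in sorted(cases, key=score):
    # Reserve the mean time plus a buffer for variability.
    needed = case["mean"] + case["variability"]
    if used + needed <= BLOCK_MINUTES:
        schedule.append(case["name"])
        used += needed

print(schedule, f"{used}/{BLOCK_MINUTES} minutes booked")
# The long, variable skull base case is the first thing squeezed out of the block.
```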
36:58
As far as education goes, the
37:11
SNS has taken this on. This is a group of people looking at AI in neurosurgical education. We're looking mostly at VR and AR applications, but at other applications as well. So my advice to
37:27
the residents is: it may seem like hieroglyphics, but you do not need to know what a ReLU is or a softmax function; that's just stuff that I like to know. You don't have to know any of that stuff.
37:39
You do have to get really good at defining an objective, collecting data, cleaning the data, and then being involved in the process. You don't have to worry about what happens inside the network, but
37:50
you do have to be aware of that sort of thing. But most of it is going to be coming up with ideas and knowing where AI will help you and where it will not help you. I'll remind you that you are in a
38:02
fantastic place if you're interested in this. UCLA has huge resources in artificial intelligence. It's just all around you. In your own department, I found this paper by Dr. Fried using EEG signals
38:17
to decode brain recognition of visual stimuli. This is a fascinating paper. It's in your institution, it's in your department. I would strongly recommend that you get involved in
38:27
this. We talked a little bit about it last night.
38:31
So, closing up here, I'm just going to talk about one possible application for artificial intelligence going forward. Roy Bakay was one of my partners; some of the senior people in the room will know
38:42
Roy, and some of you junior folks will not. Roy was a giant of functional neurosurgery. Before he came to join us, he was involved in brain-computer interface implants for patients with ALS and spinal
38:54
cord injury. And by the way, the term brain-computer interface was coined here at UCLA in 1971, just so you know. So he always encouraged me to be involved in brain-computer interfaces. In the top right
39:08
there, that's the third NeuroPace implant in the world, which I did in 2003. I've always been fascinated with this and kept involved in it, including the ICVP blindness project. But I'll just point out that
39:23
the first Neuralink (if you haven't heard of Neuralink, look it up; it's Elon Musk's project,
39:34
for patients with spinal cord injury and ALS and so on, to bypass everything and just have EEG waves move a mouse so that somebody can interact with a computer). The first Neuralink implant was done
39:47
10 days ago.
39:49
And, small world, somebody here knew who the surgeon was; I forgot who mentioned that. But this device is FDA approved now, and there are thousands of patients on their waiting list, thousands. This very
40:06
well may be a part of what we do in the future.
40:10
And I'll just point out that there are models for how to get this done, and this is precisely what we did with our ICVP project. That's not something that we're investigating right now, but there
40:23
are groups that are investigating this as an AI brain-computer interface link.
40:31
So what that means is that you need to continue to think creatively. It's a great opportunity to think creatively and to be involved in neurosurgical innovation, but you also have to be in the right place at
40:45
the right time. You are in the right place at the right time. You're in the University of California system. You have AI resources all around you. The only other team that I think may have an
40:46
advantage on you right now is Mayo Clinic, which has put hundreds of millions of dollars into AI development. They have a massive, clean data set of 10 million patients; it's all very, very clean
40:46
data. And they have a ten-year contract with
40:54
Google's DeepMind team. And
41:12
Google's DeepMind team is coming out with a
41:17
product called Gemini. If you haven't heard of Gemini yet, you will. It's stronger than ChatGPT-4 and probably will be stronger than ChatGPT-5. So they're going to combine that with this giant
41:30
data set. They also have projects with Microsoft. And Cui Tao is one of the leaders in AI development; they just hired her, and she's going to be at Mayo in Florida. And this is a meeting that's
41:42
coming up there in a few months. There should be virtual links for it. I would encourage anyone that wants to be involved in this to gather as much information as you can. So in summary, I think
41:55
this is an exciting time to be in neurosurgery. You are in the right place at the right time. And I would strongly encourage you to be involved in artificial intelligence and what's coming in
42:06
neurosurgery. Thank you.
42:11
You know, the thing is that, with Neuralink coming out, there was no new implantable microelectrode array for over 20 or 30 years, and just with the influx of money that Neuralink was able to put
42:24
into the project, over the course of four years they have an implanted device in a human, something that Blackrock has been trying to do for 20 years with their Utah array. One thing that I
42:39
wonder what your advice is, as someone that's interested in doing this kind of work: funding is the major issue. It feels like a lot of public funding is still lagging, and that almost
42:52
inevitably, the private company that has the funding to do this work will do it and will fund it. And once they have a good enough data set, it's game over. And you were talking about, you know,
43:05
how DeepMind was trying to work with your institution and your institution shut it down. Being at a UC, being able to get any access to patient data is also another
43:17
hurdle. So how do you navigate that aspect? And also, in terms of funding sources, what sources do you know of that are out there, that are sort of at the forefront of knowing that this is
43:30
a wave that's coming and that needs to be funded? Well, there's going to be a lot of private money that will come in, but there needs to be a breakthrough, just like with ChatGPT. I mean, before a
43:41
year and a half ago, no one was really talking about large language models, but there was a major breakthrough and it actually surprised them how successful it was. And that's what I mean, the
43:52
bigger the model you have, the more power it has, and that goes up exponentially. They weren't aware of it. They're holding off on ChatGPT-5, allegedly, until after the election, because they
44:04
don't want to be blamed for having swayed it.
44:08
That's literally the rumor; that's why they're holding off. So there's going to have to be a breakthrough, and when the breakthrough comes, there's going to be lots and lots of private money that's going
44:19
to come.
44:21
Whether Neuralink provides that, I don't know. The problem with the Utah array, well, it was wired. Over time, scar would build up around the electrodes, so you get a decreasing
44:36
amount of
44:38
signal over time. And the reason why I went with a wireless floating array is that the problem with the Utah array, with anything that's wired, is that there's going to be micromotion, because it's tethered and the
44:49
brain moves. So it's always going to be that micromotion that builds up scar over time, and you start to lose your signal. With a floating array, the device moves with the brain, so you
45:02
get less scar. So I think that's part of the problem with the Utah array. That being said, it was a major breakthrough in its time. So I think you're going to have to wait for a big breakthrough. I don't
45:12
know whether Neuralink is going to be the thing. They're very, very quiet about what they're doing. I mean, literally, it was released as a tweet 10 days ago. I mean, they're,
45:24
they're not showing their hand on what they're up to. I think the answer is you seem to need a guy with $200 or $300
45:36
billion to get there. Interesting question. Just a comment. I mean, this is really amazing work, obviously, and great to see some of it, some realization of it. I think, you know, the answer to your question,
45:44
in my opinion, is that to get the money, you have to have like a business model for it. So like a lot of the research has been done in very smart, good use cases, you know, people that can't see,
45:57
people that can't
45:60
move their arms and legs who can control the computer. There are only so many of those people out there. You know, you have to have a use case for something that
46:09
makes them a lot of money, like a lot of money. And then you'll have tons of funding going into it, right? That's kind of how this stuff runs.
46:17
Yeah, the technology is held back, the business case is held back. But I mean, at some point, there needs to be a breakthrough.
46:26
I can tell you we'll be publishing our data with the wireless array. It's really interesting. I mean, he can actually navigate based on artificial vision. That's proved here, by
46:38
the way. I mean, that, or something like that, was done here. That was a wired setup, and we'll see what a wireless one does. I get to get a cure obesity. It's a lot of fun to make.
46:54
So, thanks very much. We hope you enjoyed this presentation. The material provided in this program is for informational purposes and is not intended for use as diagnosis or treatment of a health
47:10
problem or as a substitute for consulting a licensed medical professional. Please fill out your evaluation of this video to obtain CME credit.
47:26
This recorded session is available free on SNI
47:31
Digital.org. Send any questions or comments you have to ausman@snidigital.org.
47:39
This program is from the UCLA Department of Neurosurgery, Linda Liau,
47:46
the chairwoman, and its faculty, and is supported by the James I. and Carolyn Ausman Educational Foundation, owner of SNI and SNI Digital, and by
48:00
the Waymaster Corporation, producers of the Leading Gen television series, Silent Majority Speaks, Role Models, and the Medical News Network.
48:20
Thank you