Technology is reshaping the world at a pace few people, inside or outside the industry, expected. But every so often, you meet someone who has not only witnessed the major waves of technological change but helped build them. In this conversation, Marcus Fontoura, Technical Fellow and Corporate Vice President at Microsoft and CTO of Azure Core, walks us through the story of AI, what leaders are getting wrong, and how to develop the one thing that will matter more than any model or algorithm: human agency.
Marcus has lived through every major inflection point: early search, the rise of cloud computing, and now large-scale AI systems.
One of the first things he challenges is the popular narrative that we are heading toward an AI apocalypse, or an AI utopia. Both extremes, he explains, miss the point:
“My approach was more like, let me just explain what the technology is and what it does… it’s basically a prediction system.”
Marcus offers a clear explanation of modern AI. He compares today’s large models to a system that has:
“Read nonstop for fifty thousand years… with near perfect memory.”
But this doesn’t make AI a mastermind. It makes it a stochastic parrot, extraordinarily capable, but not self-directed.
He also emphasizes that while AI will automate the mechanical layers of work, it will amplify, not replace, the leaders who know how to think:
“If your job is typing in a spreadsheet… then I would feel scared. But if you have the knowledge and experience to really add value, I wouldn’t feel scared.”
His point is: the danger isn’t AI. The danger is becoming someone who only performs tasks AI can do.
We also cover the uncomfortable but increasingly visible trend: people relying on AI so heavily that they lose their independent critical-thinking muscles. Marcus acknowledges the risk:
“That is a little bit concerning… we will see good uses of technology and uses we don’t want to happen.”
He stresses that organizations must raise the bar for juniors, not lower it, and that AI helps experts more than novices:
“More experienced folks already know what to expect… junior employees may not know what is correct or incorrect.”
This is one of the most important insights in the entire episode: AI accelerates expertise; it does not create it.
On hallucinations, Marcus is exceptionally candid:
“The more we use it, the more you have techniques to avoid it… but we have to double-check those things.”
On leaders fearing displacement:
“Use AI in a way that amplifies your skills… automate the mechanical tasks and focus on what only humans can do.”
And on what truly matters in this moment of technological upheaval:
“Technology shouldn’t influence us. We should influence what we want to see in our society.”
And he gave a useful explanation of what distinguishes successive ChatGPT models:
“When you say that bigger AI models, when you move from ChatGPT three to four, four to five, basically these models have more parameters. So this means that you read a lot more, but also you memorize a lot more.”
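To make the "prediction system" framing concrete, here is a deliberately tiny sketch of what "predicting the next word from probabilities" means. This is not how ChatGPT is implemented (real models are transformer networks with billions of parameters, not word-pair counts), and the corpus here is made up; it only illustrates the core task Marcus describes: given what came before, sample a plausible next word.

```python
import random
from collections import defaultdict, Counter

# Tiny corpus standing in for the "50,000 years of reading".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model: the crudest
# possible "parameters" a language predictor can memorize).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # one of: cat, mat, fish
```

In this toy, "more parameters" would mean counting longer contexts over vastly more text, which is the sense in which bigger models "read a lot more" and "memorize a lot more": more stored statistics, and therefore better predictions, but still prediction.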
This conversation is a reminder that the most important focus is not AI itself; it is the leader using AI with judgment, clarity, and agency.
Get Marcus’s book, Human Agency in a Digital World, here:
Episode Transcript (Automatic):
Kris Safarova 00:45
Welcome to the Strategy Skills podcast. I'm your host, Kris Safarova, and this episode is sponsored by StrategyTraining.com. You will be able to get key insights and action items from this episode at firmsconsulting.com/action. And we also have some gifts for you. You can get access to episode one of How to Build a Consulting Practice at firmsconsulting.com/build. You can get the overall approach used in the well-managed strategy studies at firmsconsulting.com/overallapproach. And you can get a McKinsey and BCG winning resume example, which is an actual resume that led to offers from both of those firms, at firmsconsulting.com/resumepdf. Today we have with us Marcus Fontoura, who is currently in his second tenure as Technical Fellow and Corporate Vice President at Microsoft, where he works as CTO for Azure Core, and he's also the author of Human Agency in a Digital World and A Platform Mindset. Marcus, welcome.
Marcus Fontoura 01:53
Thank you so much for having me, Kris. A pleasure to be here.
Kris Safarova 01:57
I'm so glad to have you with us today because of your background. And speaking of your background, I'll just start there: maybe very briefly, you could give us an overview of the journey that led you to the very important role you have right now.
Marcus Fontoura 02:12
So I'm originally from Brazil, where I did most of my studies, and then I moved to Canada for part of my PhD program. I immigrated to the US in '99 for a postdoc at Princeton. From there I went through several companies: I started at IBM Research, then moved to the valley, to Yahoo in the early days, and then Google, not so early days, but in the beginning of search. So I worked a lot on web search throughout my career. Then I moved to Microsoft to work on Bing, also in the area of search and advertising. Within Microsoft, I had a couple of different roles working on the infrastructure for Azure, our cloud service. I had never worked in Brazil, so I went back to work for a fintech in Brazil for a couple of years as CTO, and then I came back to Microsoft to look into cloud services as part of the Azure offering. I'm now CTO for Azure Core, also looking into infrastructure for our Azure services and AI, and the integration points of several of our large-scale AI offerings as well.
Kris Safarova 03:55
Have you always known that you wanted to be a tech guy?
Marcus Fontoura 04:00
No, that's a great question. I was more of a math person growing up. I love math, so I thought I would go to school for math, but in my first year of college I had to take an introduction to computer science class, and I fell in love with computer science. So I switched and did computer engineering for undergrad. I really loved programming, and this was even before the internet took off; it was the mid-90s, so we had email, but it was before web search, before the browser got really popular. And I'm glad I switched, because it has been an incredible journey: first the dot-com bubble, then search, and now cloud computing and AI. Seeing how that unfolded was really fantastic.
Kris Safarova 05:19
Marcus, if you could speak to yourself back then, when you went to that first class and realized, you know what, I want to learn programming: if you, from the future, told yourself what 2025 would look like, how would you react?
Marcus Fontoura 05:32
Yeah, I think we made tremendous progress that was very hard to predict, right? I remember the first time I saw Mosaic, the first web browser, and then a professor showed me how search worked. I couldn't even grasp what it was doing, because I didn't really understand the concept of all these websites that we could search across. And seeing how we were able, with technology, to democratize so much of communication, cooperation, and how we do research: back then, to do research you had to go to a library and manually search for papers and manuscripts, and now we can do most of that online. So it's incredible to see how information technology became so prevalent and so central to our lives, something I wouldn't have predicted back then.
Kris Safarova 06:44
And given that you have the unique experience of witnessing all that innovation, all those changes, pre-internet, search, and so on and so forth: where do you think we are going? Let's say in five years, ten years, what do you think the world will most likely be like when it comes to AI, automation, and technology?
Marcus Fontoura 07:03
I think we're at a big inflection point, since a couple of years back when we saw the first large language models and ChatGPT. To me, AI is much more of an evolution than a revolution, because we can see that since the early 2000s we made tremendous progress: machine translation, IBM beating Kasparov in chess, then the IBM computer playing Jeopardy and being successful at question answering. More and more, with the amounts of data we have in the world and the computational resources available to us, we can do more with the intelligence we are building. Large language models are the latest step in this innovation, and they're really not new, right? Transformers are the fundamental architecture for a large language model, and the original paper about transformers came out a few years back, so we have been doing this for a while. But ChatGPT really popularized what we think AI is, in terms of the chatbot interface, and I think it blew everybody's minds, because even people working on the initial language models couldn't grasp that it would be good for so many tasks. What really changed is that before, AI was developed for a specific task, and now with large language models the models are not tailored for any specific task to begin with, but they can be adapted to so many tasks. So I think we will continue seeing that evolution go even beyond what we see today, with more powerful models and also smaller models that are more targeted at some piece of knowledge, some area. Radiology is one area where there is a lot of interest in making a huge impact.
And lots of aspects of medicine, lots of aspects of customer support. So I think we will see a lot of use of AI, and the question is, how will that impact us humans living through it? My take is that it will really amplify our ability to do more, and to do more interesting tasks. The comparison I use is word processors: in the beginning we saw the first spell correctors, which would fix something you typed wrong, then grammar correction, and now ChatGPT can even write some paragraphs or the whole text for you. I think the same will happen in other professions. Once you have a tool to automate some of the more mundane tasks, you can focus on the higher-level abstractions: what do you really want to write? You focus on the tasks that require your human intelligence and analytic skills, where computers are still not there. That's what we're going to see, more and more intelligence from tools automating the mechanical tasks, and us leveling up, working on the high-level cognitive skills and tasks that only humans can do.
Kris Safarova 11:19
And are you not noticing that people who use AI a lot, instead of spending more time doing deep thinking, doing tasks that require them to use their minds, are actually starting to lose some of their critical thinking skills?
Marcus Fontoura 11:34
Yeah, and I think that is a little bit concerning. With any new technology we will see this, right? When we build something new, there will be uses that amplify our humanity and uses that are not so great. For instance, think of social networks: they are great in the sense that they connect people, and we already have many studies showing that social networks are very helpful for job finding and for connections through weak ties and things of that nature. But on the other hand, there is also the negative use of social networks, which is well documented: the mental health impact on teenagers and negative impacts on society. With any new technology, it is very hard, while you are developing it, to predict how it is going to be used. In education, for instance, people using it to do their assignments for them is a use we don't want to happen. What we really want is for the assignments and the problems students do to become more sophisticated. Even more interesting, to me, would be: use ChatGPT, analyze this, and then tell me what's wrong with it. Assume the student is already using ChatGPT and ask for a higher-level comprehension or analysis task that only the student can do. I think this is how we're going to raise the bar, and it will take a while for us to really understand how much we can raise it.
We are seeing a lot of this with programming now, Kris. A lot of the tasks that junior engineers do can already be automated, but instead of saying we don't need junior engineers, what we're trying to do is raise the bar: ask junior engineers to do more, and more complicated, tasks. That, I think, is what we have to do with students, or in any area we apply AI to.
Kris Safarova 14:10
When it comes to professionals, business professionals, people getting an MBA, people working for major management consulting firms and other major organizations: how scared should people be about being replaced and becoming irrelevant, having to start from scratch again and figure out how to be useful in the world?
Marcus Fontoura 14:30
I would only feel scared if you are not able to do anything that is not mechanical, right? If your job is typing in a spreadsheet, something the computer could do for you, then I think I would feel scared. But if you have the knowledge and experience to really add value to the bottom line, I wouldn't feel scared. Because what we really want to do is use AI in a way that amplifies your skills, to make you more productive and automate the mechanical and boring tasks. One example from my day job at Microsoft Azure: to manage a cloud service across the world, we need a lot of internet cables connecting the several data centers across the world. In the past, we used network engineers to monitor every time one of these cables was cut, because if a cable is cut we could lose connectivity to one of the sites. But it's a very tedious task: we have more than 400 companies across the world, the contractors that manage these cable cuts. If a cut happens, it needs to be communicated, we need to dispatch a service team there, they need to try to fix it, and then they need to engage. We were able to use AI to automate all of that and lift that burden from the network engineers. Now they can really focus on improving the quality of the network services, or do something more innovative and think about new technologies. That's a positive use of AI. To me, it's a great accelerator: it will amplify what you already do well and give you more time to focus on the tasks that really matter.
Kris Safarova 16:41
Makes a lot of sense. So you recently wrote a book, Human Agency in a Digital World. What did you want to communicate? What were the key things you wanted to communicate via that book, especially given that you have a very tough role and probably no time to sleep, but you still found the time to write it? It must be something very important for you to share with the world.
Marcus Fontoura 17:02
Yeah, so Kris, I was really thinking about this, especially because my daughters were asking me the same types of questions you're asking: what is the future, what should we study, will we all have jobs? I feel that technology is more and more pervasive in our lives, but a lot of the time we feel that we are just bystanders, and technology is dictating how our lives should go. Now we watch news through social networks; the way to connect with people is through Facebook or Instagram; the way to communicate is through email or messaging on a cell phone. We have so many dependencies on technology, but most people are just not so well versed in technology. So I really wanted to explain, in simple terms, how some of these systems operate, so that we can start taking some of the burden from the technology developers onto ourselves, and develop the agency we need to think about which systems we want in our lives and which systems we don't. I feel we should try to understand it more so that we can develop this agency. One simple example is social networks, as I just said: what are the good uses of social networks, and what are the uses that we as a society could probably do without? I really wanted to level the playing field so that more of society can be engaged and have these discussions.
Kris Safarova 19:08
For someone who is listening to us right now who doesn't have a technical background, who built a successful career leading teams and being a subject matter expert in their particular field: they now feel very scared. They don't show it at home or at work, but they feel scared that they are falling behind. What would be your recommendation on how someone can catch up enough, as a leader, to become AI literate, literate in the technological advancement they need to understand, and start getting enough knowledge to integrate AI and automation, let's say, within their consulting practice?
Marcus Fontoura 19:52
I think, Kris, a lot of the news we see today is very apocalyptic about AI, in both camps, right? One camp says AI will take away our jobs, with the risk of robots dominating the world and ending the human race. The other camp says AI is amazing, it will amplify our jobs, and there are tons of books and op-ed pieces in the newspapers that take one camp or the other. My approach was more like: let me just explain what the technology is and what it does, in simple layman's terms, so that everyone can understand. It's basically a prediction system. It's a model that understands language really well, and it's a prediction system working with the knowledge of the many, many texts that these AI systems read. You can think about an AI system as somebody who read nonstop for 50,000 years, or something like that, and then memorized all that knowledge. Of course, in many tasks, if you read that much and memorize everything, you have perfect memory, or near-perfect memory. When you say bigger AI models, when you move from ChatGPT 3 to 4, 4 to 5, basically these models have more parameters. This means they read a lot more, but also memorize a lot more. And of course, if you read and memorize that much, you'll be able to do a lot of tasks, but basically that's all you can do. In the book, I mention that researchers called the AI systems of today stochastic parrots: parrots because they can just repeat things, and stochastic because they work in this probabilistic way.
So they memorize all these texts and then, based on the probabilities of the words, predict the next word to complete the language. In the book I try to go into detail on how this really works. Once you understand that this is basically the technology, it's very clear that it cannot destroy the world and dominate humans, or cure all the diseases, as people were saying. It's much more in the middle ground. It's just another tool that we have in our hands, a very powerful tool, perhaps the most important tool we have built if you consider the proportion of tasks it might affect, but still a tool that will impact our lives. It's not as apocalyptic as people are trying to make it sound. The key point is: what are the applications we want to build with this technology? That's where our agency comes in, and we can have this debate. If somebody says, Kris, I can give you a robot that can teach your kids from kindergarten up to 12th grade, and they will for sure get into Harvard: is that something you want or not? I would prefer to raise my kids myself, even if I'm not as perfect as a robot, because part of being human is taking care of our kids and being part of their education. On the other hand, if the robot is driving the car for you, or doing the groceries for you, it's automating parts of your life that you might not consider essential. So the more we familiarize ourselves with technology, the more precise a debate we can have about how we want to apply it to our lives.
Kris Safarova 24:29
Thank you, Marcus, and thank you so much for writing it. I think we definitely need more books like that, which explain in simple language how technology works, what it does, what it can do, and how to extrapolate what it will likely do down the road, let's say in three years. On top of that, would you recommend any other books or sources of information for someone who is listening now and decides they want to start catching up?
Marcus Fontoura 24:55
Yeah, there are a lot of introductory courses on the internet, on YouTube and even Coursera, about how AI works. One great documentary is AlphaGo; you can see it on Netflix. It talks about how they built the AI system that beat the world champion in Go, and that documentary is great for understanding some of the key fundamentals. I would just search online; there are tons of resources. I would avoid resources that try to make predictions about how the technology will be used, and focus on the ones that really explain how the technology works today. And then really get your hands dirty playing with it: what can you do with ChatGPT, what can you do if you write more complex prompts, how can you use prompts to get some of your tasks automated? Can you use it to help with your code, or implement some agents that can help you do simple tasks? Really grow your understanding by doing little things that bring it to life, because none of it is out of reach. If you want to understand the math and the complexity of how it works in detail, that's another level, but understanding it at a high level, so that you can see how it impacts your life, is very, very simple. I believe anyone who has an MBA can easily learn it. I think it should even be a course everyone takes, even a high school course, because it is so pervasive in our lives.
One of the anecdotes I tell in the book: if you're just listening to a podcast or to music in your car, how many computer systems do you touch? There are dozens of computer systems you touch just to listen to music. We take these things for granted, but really understanding why these things are so interconnected, and if one of them fails, what the impact on the others is, and why we sometimes have these massive outages where the system stops working: all this interplay, I think, is really easy to understand at a high level. You won't become an expert unless you dedicate a lot of time, but to understand it at a high level, there are lots of resources available. I give lots of pointers in the book, but if you just search online there are lots of pointers too.
Kris Safarova 28:12
Thank you, Marcus. And let's say we take somebody who runs a boutique consulting firm, and it's very small, they just started, maybe they have five people working for them. They are already at the point where they have some understanding of how AI and automation could be helpful for the business, and they now want to start integrating AI and automation into it. What would be your recommendations, your advice, on what they should be careful with and what they should pay attention to?
Marcus Fontoura 28:44
I think if you understand how the technology works, you see that this problem of hallucinations is still present, and in the current state of the art there is really no way around it. AI is basically, as I said, like a human who read nonstop for 50,000 years: it will read all sorts of books, some written by credible authors and some by non-credible authors. All that knowledge is condensed into a series of parameters, but when the model is repeating it back, it has no notion of saying, this is credible and this is not credible. So a lot of the time, even when I'm doing my own research, we need to be super careful. The more we use it, the more techniques we have to keep it from hallucinating so much: you can ask it to provide only references from credible sources, and so on. Even then, sometimes it creates a reference for you and pretends it's from a credible source. So we have to really double-check those things. That's one major problem. The other major problem is to be careful how you introduce it, because what I've observed is that a lot of the time AI helps experienced people more than less experienced folks. More experienced folks already have a sense of what to expect, so they can just ask AI, and once they get some answer, they know what's true and what's not, and they know how to tweak it to get the task done quickly and with precision. More inexperienced employees might not have that; they just don't know what to expect. It's harder for them to discern what's correct and what's not, what is a good approach and what's a bad one.
If I had a boutique consulting firm, I would spend a lot of time training my junior employees, even pairing them with more senior employees, so they can understand how to use AI effectively, but also raising the bar for the junior employees. We don't want to be in a situation where we are creating more inequality: where the people who are already senior become more senior, and it becomes impossible for anybody else to be productive, because the junior employees who are not experienced are in a situation where they will never become experienced. I think we should be very careful about that scenario.
Kris Safarova 32:03
Of course. And building on that: do you see certain assumptions that leaders are making about AI and automation that should be challenged or reframed?
Marcus Fontoura 32:14
It's very wild what people will think, right? A lot of people were saying in the beginning that AI will take jobs away from radiologists, and in my opinion it is very, very far from that. Some tasks, like call center work, I think are much closer to being automated than more complex tasks that require really deep thinking and analysis. So I would be very careful about making any predictions about displacing humans. I am much more in the camp of augmenting human capabilities. Of course, there are some tasks that are very mechanical, where perhaps you don't need a human to do them anymore, but hopefully those humans are doing something else; they are not just doing mechanical tasks. A long time ago, when I was in school, I had a professor, and I remember this to this day. He told the class: if you're trying to solve a problem and at some point you get to an equation the computer can resolve for you, you can leave it; you don't need to solve the equation by hand. And I feel that's the thing: we want employees and humans to think up to the point where the work becomes mechanical. But I don't see AI getting to that point of judgment yet. What is the building we need to build? What is the bridge? What are the cases we need to litigate? What are the stories the journalist wants to cover? All of that really requires human judgment and intervention. But once the journalist decides on the story they want to write, they can use AI for a lot of the research, for writing an initial draft of the text, for doing a lot of the tasks that would perhaps take them much more time but can now be easily automated.
So that's how I'm framing it. Some people are much more radical than I am, but I feel that is the better use for it, the use that really empowers people, as opposed to thinking it will be able to replace people. I don't think it will.
Kris Safarova 34:53
Marcus, what technologies excite you the most as a CTO? AI, quantum computing, anything you want to mention.
Marcus Fontoura 35:01
I think those two. AI is, of course, very interesting, and a lot of people, including Bill Gates, have said it is probably the most impactful technology they have seen in their lifetime. Certainly it is for me, just because of the variety of tasks we can accomplish with it. Before AI, I was very excited about search; that's why I dedicated so many years of my life to it. As someone very interested in education, technology, and research, having a tool that can automate knowledge transfer, summarize knowledge, and really make information more available to people was a very important and very relevant problem that I wanted to solve. I think we did a great job in search, and now AI is just the next step. When you talk about quantum, people in general feel that a quantum computer is really a faster computer, or a bigger computer, but it's not. In fact, if you compare the specifications of a quantum computer to a classical computer, the quantum computer is, by many measures, smaller and slower. But for certain types of problems, the quantum computer can do a much better job than the classical computer, because the way it operates is very different. You have probably heard of bits, zero and one; everything we do in a classical computer is basically computing functions of zeros and ones. Quantum computers, by contrast, work on vectors, which gives them a lot more degrees of freedom, so they can do much more interesting operations for certain types of problems.
These tend to be the problems that are very hard to solve on a classical computer because they take exponential time. When I say exponential time, think about two to the power of n, where n can be very large. At that point, a program that takes that amount of time will never finish on a classical computer, but in some cases it can finish very quickly on a quantum computer. So I'm very excited about the advances we're seeing in quantum computers and quantum algorithms, because some of the key problems you can model, even natural systems, quantum mechanics itself, biological systems, tend to be of a nature that can be greatly accelerated by quantum computers. That is an innovation that could bear a lot of fruit for all the basic sciences. There are also all the applications of AI for simulation of biological systems, protein folding, all of these things; it's an area where we are seeing a lot of innovation. Several of the recent Nobel Prizes in physics and chemistry were awarded to people working on AI and simulation, because of the ability we now have to simulate biological changes in the computer rather than testing them in the lab, where they take a long time to happen. We can make much faster progress on experiments that were very hard to do, so I am super excited about that. But I do feel that more important than the technologies is what types of applications we want to build, and there are applications that I think we, as a society, should demand that people work on.
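Marcus's "two to the power of n" point can be made concrete with a small sketch (our illustration, not from the conversation), comparing the step counts of a polynomial-time and an exponential-time algorithm as the input grows:

```python
# Illustrative only: step counts for O(n^2) vs O(2^n) algorithms.

def polynomial_steps(n: int) -> int:
    """Steps for an O(n^2) algorithm, e.g. a simple quadratic sort."""
    return n * n

def exponential_steps(n: int) -> int:
    """Steps for an O(2^n) algorithm, e.g. brute-forcing n binary choices."""
    return 2 ** n

for n in (10, 50, 100):
    print(f"n={n}: polynomial={polynomial_steps(n)}, "
          f"exponential={exponential_steps(n)}")
```

At n = 100 the polynomial algorithm needs 10,000 steps, while the exponential one needs 2^100, roughly 10^30 steps; even at a billion steps per second, that is far longer than the age of the universe, which is exactly why such programs "never finish" classically.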
For me, self-driving cars are one example where I see potentially a very big impact on many aspects of our lives: first, a reduction in accidents and deaths, then lower cost, less traffic, less pollution, and probably some impact on how we do urbanization in cities; we could have fewer parking lots and more green spaces. This is one of those problems where the technology is there, but the application is not. I think we as a society never said, let's invest in this and go full force deploying autonomous vehicles, and it still seems to be a few years out. I thought it would already be here when I first saw the technology, maybe fifteen years ago. It seems to me that we have most of the technology, but we just don't have the willpower, from government and from society, to fully deploy it. That, to me, could be our generation's Manhattan Project: let's really deploy autonomous vehicles everywhere. It could have a huge impact on the economy, create lots of jobs, and it has huge potential to build better infrastructure and solve some affordability problems for housing and so on. So I'm really bullish on that.
Kris Safarova 41:18
I also wanted to ask you: so many leaders, of course, struggle with the pace of AI change. Could you help our listeners differentiate between something that is hype and something that is actually an actionable opportunity for the organization?
Marcus Fontoura 41:35
Yeah, I think that is hard in general, especially now, because it's very easy to build a prototype of something that has no meat behind it, and sometimes it's not scalable. So I feel a good approach, and it has been for a while, is the idea of doing MVPs: you build a small prototype, but a small prototype that is deployable, so you can measure the impact, deploy it, and really validate that you have the idea right, that you have the right unit economics for your idea, and that it scales. I am very hesitant about doing things that are not data-driven; things that tend to be hype are just hype because we don't have data to back them up. The things that tend to work well are the projects that start small, that we can deploy, where we can see the impact and run a feedback loop: we get the data, we iterate, we make improvements, and gradually we really scale it full force. A lot of ideas basically have no substance, because we cannot think of a way to deploy them at a scale that makes sense, or the cost is too high. So a way to validate the idea early on, and to validate the unit economics, is super key for any startup. If I'm doing due diligence on a startup, I want to get to the unit economics: how is it going to scale, do you have the structure to deploy it, how many resources does it consume, and does it pay for itself at the scale you can reach? It is a simple economic analysis, but getting to the basics so we can do that analysis is key for me.
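The "simple economic analysis" Marcus describes can be sketched as a back-of-the-envelope payback check (our illustration; all numbers and names are invented for the example):

```python
# Hypothetical unit-economics check: does the idea pay for itself at scale?

def unit_margin(revenue_per_unit: float, cost_per_unit: float) -> float:
    """Contribution margin earned on each unit served."""
    return revenue_per_unit - cost_per_unit

def months_to_payback(fixed_cost: float, margin_per_unit: float,
                      units_per_month: int) -> float:
    """Months of operation needed to recover the up-front investment."""
    monthly_margin = margin_per_unit * units_per_month
    if monthly_margin <= 0:
        return float("inf")  # negative unit economics: it never pays itself
    return fixed_cost / monthly_margin

# Invented figures: $5 revenue and $3 cost per unit, $100k to build,
# 10,000 units served per month.
margin = unit_margin(revenue_per_unit=5.0, cost_per_unit=3.0)
print(months_to_payback(fixed_cost=100_000, margin_per_unit=margin,
                        units_per_month=10_000))  # 5.0 months
```

The point of the sketch is the feedback loop: a deployable MVP produces real numbers for these inputs, and if the margin is zero or negative, no amount of scale fixes the idea.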
Kris Safarova 43:52
Marcus, I also wanted to ask you: for leaders listening right now who want to integrate AI and automation, let's say within a consulting practice, but are very concerned about sensitive client data going into LLMs, what would you tell them?
Marcus Fontoura 44:10
So these LLMs are pretrained. When you try to understand how these things work, the P in ChatGPT stands for pretrained, which means the LLM is already trained. Normally, when you use an LLM, your data does not flow back into the base model that was already pretrained; it stays localized to your organization. The LLM is trained over public data, and over datasets the model owners have access to, but not over your data. Basically, you can think of your model as being trained in two steps. The first step is the LLM itself, which is global knowledge about how language works: it read, for the equivalent of fifty thousand years, all the text available on the internet, but it didn't read the organizational documents that are proprietary to your organization. Then, when you do the training on your documents, it uses all the LLM's knowledge as a basis, and you are training on top of that, just providing the extra knowledge: what your company is about, the operational procedures in your company, how you do business, your business plan, the HR documents, the legal documents, the rules your company has to abide by. Those things will never flow back into the LLM, so this risk actually doesn't exist. Training these LLMs is very, very expensive, it takes many months, and it is not done over the proprietary data of any particular company. You do see cases where, for instance, some newspapers are suing some of the model owners, saying they believe a version of the model was trained over proprietary data, and if that's the case, then I think they have to retrain the model.
But for the clients of these models, the data will never flow back. By the way, the only thing that does flow back to the model is when you say a result was good or bad, when you click thumbs up or thumbs down. That has to go back as a feedback loop so they can adjust, but your data will not flow back.
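The two-step picture Marcus describes can be sketched in code (our illustration; the documents, names, and the naive keyword retrieval are invented stand-ins for a real system such as vector search). The key property is that the base model stays frozen, and proprietary documents are only used locally to build the prompt at query time; nothing updates the pretrained weights:

```python
# Illustrative sketch: private documents stay local and ground a prompt
# for a frozen, pretrained model. Nothing here modifies the base model.

PRIVATE_DOCS = {
    "hr-policy": "Employees accrue 20 vacation days per year.",
    "biz-plan": "Expand the consulting practice to two new regions.",
}

def retrieve(query: str) -> str:
    """Naive keyword-overlap retrieval over local documents (a stand-in
    for real vector search). The documents never leave this process."""
    words = set(query.lower().split())
    return max(PRIVATE_DOCS.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

def build_prompt(query: str) -> str:
    """Compose the grounded prompt that would be sent to the model."""
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(build_prompt("How many vacation days do employees get per year?"))
```

Whether a given provider logs prompts or uses feedback signals is a matter of that provider's data-use policy, so the sketch only illustrates the architecture, not any specific vendor's guarantees.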
Kris Safarova 47:10
Thank you, Marcus. I want to wrap up with one or two of my favorite questions. The first one is: over your entire incredible career so far, what were two or three aha moments, realizations you feel comfortable sharing, that really changed the way you look at life or the way you look at business?
Marcus Fontoura 47:30
There were many moments where I learned a lot in my career, and normally those are situations where you are put under a lot of stress, or where you have to do something extraordinary. One example I always tell is from the beginning of my career, when we were trying to build a system at IBM; that was my first job. Management really didn't believe that the approach we had, me and another junior engineer working on the problem, was right, and they wanted us to stop working on it. But somehow we continued, and it turned out to be a very, very successful project. What was important for me is that this happened early in my career, and it gave me the confidence to really trust my instincts. A lot of people ask me what they need to do to be more successful and progress in their careers, and for me this idea of taking risks, but taking mitigated risks, always comes to mind; it's something I was able to develop early on. It is the idea of trusting your technical judgment and having good intuition about when it's worthwhile to invest in a project, even when outside forces are telling you not to, or it seems very risky. If you have an instinct that it can be very high reward and that you can solve it, those moments, those mitigated bets, can really pay off and really boost your career. At least for me, it happened a couple of times, and it boils down to good technical judgment: you need it to know when to trust yourself and when to take the risk. And it makes me sad, because I see a lot of very smart people working on projects that are probably not the best fit for their skills, or who are just comfortable in a project and not really pushing the boundaries. I feel that to have a successful career, you should strive to invest in projects where you can really add value based on your skills, and then really push the boundary and not be afraid to fail. Definitely.
Kris Safarova 50:26
And if you could instill one belief in every listener’s heart, what would you pick?
Marcus Fontoura 50:32
I think the most important thing to me these days is that we can influence technology; technology shouldn't just influence us. We should decide what we want to see in our society and really take accountability for it. In the same way that we can vote and try to influence what the government does for us, we should also recognize that algorithms are institutions too, and we should feel empowered to try to influence those institutions to build a better world for us. Definitely.
Kris Safarova 51:12
Marcus, thank you so much for being here. I really enjoyed our discussion. Where can our listeners learn more about you, buy your books, anything you want to share?
Marcus Fontoura 51:20
Kris, thank you so much for interviewing me. This was a great conversation. I have my website, fontoura.org, which is my last name dot org; there is a lot of information there about me and the books. I am also very active on LinkedIn, the only social media I am part of, and I try to post regularly there, so feel free to connect and follow. The books are available anywhere books are sold; just search for my name. I would love your feedback on any of those.
Kris Safarova 51:56
Thank you so much. Marcus, really enjoyed our discussion.
Marcus Fontoura 51:59
Thank you so much. Kris, thanks for your time. Very happy to be here.
Kris Safarova 52:03
Our guest today, again, was Marcus Fontoura, who is currently in his second tenure as Technical Fellow and Corporate Vice President at Microsoft, where he serves as CTO for Azure Core. He is also the author of Human Agency in the Digital World and A Platform Mindset. You can get key action items and insights from this episode at firmsconsulting.com/action. You can also get access to episode one of How to Build a Consulting Practice at firmsconsulting.com/build, and you can download the free overall approach to well-managed strategy studies at firmsconsulting.com/overallapproach. You can also get the McKinsey and BCG winning resume example, a resume that led to offers from both of those firms and a great example to look at if you are currently looking to change jobs or find a new role at any level of seniority; you can get it at firmsconsulting.com. Thank you so much for tuning in, and I'm looking forward to connecting with you all next time.