Outdated education paradigms must be revamped to reclaim the all-important human quality: agency.
Sanjay Sarma, CEO, President, and Dean of the Asia School of Business, Kuala Lumpur, Malaysia and the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering at the Massachusetts Institute of Technology (MIT), shares insights on the artificial intelligence (AI)-agency revolution and how the human brain works.
As someone who has been highly influential in the education arena, and who helped establish the Singapore University of Technology and Design, and then served as the first Director of Digital Learning at MIT and later as MIT’s Vice President for Open Learning, what are your insights on existing education systems, specifically those associated with continuing education?
Education systems today are based on a somewhat outdated construct. Previously, you acquired a skill and diligently practised it throughout your life. For instance, if you trained as a cobbler, the technology remained largely unchanged over your lifetime. But we are in a different era now. Initially, rapid technological change only affected some sectors of industry, such as technology itself. For example, the way you design a semiconductor chip now is very different from how it was done 30 years ago. But it did not affect, say, accounting much, because the only things that changed in accounting were new regulations and new apps.
However, over the last 15 years, we have arrived at a point where job transformations have become life-changing. We need to consider the concept of “personal obsolescence”, a somewhat chilling term. And unfortunately, the education systems of today are not prepared for that rate of change. They are still designed for the “one and done” model, in which your secondary or tertiary education prepares you for life. We have no real formal structures for continuing education; in fact, the term “continuing education” has a vocational whiff, like retraining someone to use a new machine tool or repair a new kind of air-conditioner. But if you had to learn, say, AI, where would you go? You may just have to browse YouTube. And how do you certify yourself? How do you prove yourself? There is nothing there. To me, this is an existential moment of our times that is desperately important to address, because AI is leading to a rethinking of work, the workforce, and workers. It won’t necessarily show up as layoffs but as deferred hires to begin with, and then, if we don’t respond rapidly enough, as job losses. To stave that off, we need to take a formal approach to quaternary education: the lifelong, white-collar follow-up to primary, secondary and tertiary education.
While AI is changing how we work, addressing global issues such as climate change and geopolitics will also place unprecedented demands on the agility of the labour market. Consider the fact that efforts to keep global temperature increases below 1.5°C are faltering, and that climate mitigation and adaptation are becoming increasingly urgent. A raft of new technologies, such as smart grids and direct carbon capture, will need to be developed, deployed and operated. This means new skills will have to be learned urgently and on a massive scale. Continuous education is thus probably one of the most important fronts for human development and sustenance.
What recent developments in education and technology do you find troubling?
Human beings are uniquely adaptable compared to other mammals. We have an 18-year period of mental and physical growth from infancy to adolescence. In fact, the prefrontal cortex keeps developing all the way through early adulthood (compare that to a zebra foal, which can run within a few hours of birth). Our adaptability comes from this growth period, and the mental nurturing that accompanies it is what we call parenting. This is the natural origin of the human capability–and indeed need–for learning. Over the centuries, we have formalised our education system based on historical precedent. But our systems today are ‘path-dependent’–they are not necessarily what we would have designed if we had known how the brain worked. Compare medicine in the 19th century to medicine today! Unfortunately, this historically evolved system is not necessarily ideal–particularly for our urgent new needs.
The sudden acceleration of the capabilities of AI has shocked many of us. What is even more unsettling is the quest to make AI agentic: in other words, capable of doing tasks autonomously rather than serving as a tool that a human wields, the way ChatGPT does. The irony is that human learning has been gradually shorn of agency at the very time we are trying to make AI agentic. Human learning needs to return agency to the learner, as many of us have been pleading for decades. Project-based learning, experiential learning, flipped classrooms and other trends are small steps in this direction. But many of our learning practices are based on a key misunderstanding of human learning: that the teacher wields a pen, and the learner’s brain is a sheet of paper. Rather, the learner is actually building a model of the topic, and the teacher should see themselves as a facilitator and a nourisher. That is a central component of agency in learning. And trying out what you learn is a second component of agency. Without agency, practical mastery is difficult, especially at the scale we are now going to need. But old practices and dogmas are difficult to shake off.
Technology can help in several respects. Online education is an obvious approach, but it is only one piece of the puzzle, not the whole solution. In fact, the failure of purely online education unfolded when people tried to treat it as a panacea, which it is not. It is merely a tool. What I have argued for is flipped classrooms, where lessons that would typically have been taught in the classroom are instead delivered online, and classroom time is dedicated to application, exploration and coaching as students take on tasks: an agentic approach in which the flipped classroom meets something called deliberate practice. Teachers in this model are nurturers, nourishers and coaches.
AI gives us tremendous new possibilities. It is like that garage wall behind my parents’ home where I learned to play tennis; it was my greatest tennis partner. In the same way, AI becomes a coach–a mentor that can do a lot and then grade your work. Augmented reality is another tool that offers extraordinary possibilities. But none of this will work if we don’t change the underlying system with sincerity and commitment. In fact, I fear that these tools will be used in a gimmicky and ultimately ineffective way. The hold of existing systems is too strong to swim against the current. However, the good news for adult education is that because we have little in place right now, we have an opportunity to build something new and build it right. System change is generally difficult, and innovation often works best at the periphery. But overall system change will require all parties to pitch in: the government, to change regulations and offer incentives; teachers, to embrace new practices; companies, to accept this for adult education; and younger learners and parents, to buy into this departure from the practices they grew up with. Starting with adult education gives us a self-contained blank slate, clear of legacy.
I also find it ironic that educational systems and practices in many countries in Southeast Asia have largely remained unchanged since these nations gained independence from former colonial powers. There is a sort of over-reliance on colonial constructs that are no longer valid, and a loyalty to systems that the erstwhile powers are themselves trying to shake off.
How can adults and retirees be enabled to continue learning to improve their lives?
We still look at adult or continuous education very transactionally, and tend to look for a return on investment in the short term. Instead, we need to recast it as a continuous process, not unlike going to the gym to stay healthy. Think of a company that decides it needs its people to be healthy and installs a gym on the premises, assuring its employees that they do not have to take time off if they go to the gym for, say, three hours a week. Societies need to do the same thing with continuous education. In fact, at the Asia School of Business, we are pivoting around something called Agile Continuous Education, or ACE. We are saying it is like a gym inside a company, but for learning. For this to be successful, you need to allocate time, budget, physical and mental space, and recognition to get your people up to speed. So, the next time there is a crisis from a disruptive technology, you will be better prepared, rather than being a deer in the headlights, which is what many of us are right now with AI.
Many countries have started doing some things right in this area. Singapore, for instance, introduced the SkillsFuture Credit scheme in 2015 to encourage its citizens to deepen their skills or reskill in new areas outside of their current field. Credits are offered that can be used on top of existing course fee subsidies to pay for a wide range of approved skills-related courses. Malaysia has something similar with HRD Corp. But this is not enough; the entire infrastructure needs to come together for things to click. It is an entire arch that needs to be constructed–you cannot just build half an arch. And that is the trap we find ourselves in with education.
As MIT’s first Vice President for Open Learning (2012–2021), you spearheaded transformative learning initiatives such as edX and OpenCourseWare. Looking back, are you satisfied with the results?
I was satisfied with the results of open learning. But I had an instinct, which by and large I still believe is right: that online by itself is a glass half empty. For someone who is parched, with no water, a glass half empty still contains water. But the full glass, the other half, involves the flipped classroom component I described earlier. I authored a book, Grasp: The Science Transforming How We Learn, which cautioned against betting entirely on online. At MIT, we made an equal effort to encourage experiential learning. This came easily to my colleagues because it is a central motto of MIT: learning by doing. We also extended this to the outside world. For example, we worked with refugees from Syria in Jordan: we offered them online courses backed up with in-person coaching and hackathons, followed by internships. In fact, that is where Agile Continuous Education was born.
But if I had a regret, it is that the online message dominated, and our in-person message did not necessarily stick. Online appears so glamorous with all these high-profile courses. Numbers are easy to count, and we tend to count only what we can count easily. That’s sort of a failure in our society as a whole–we value that which we can count simplistically and not that which is important but uncountable. Hence my regret is that we didn’t double down on the other, empty half of the glass: the in-person component. This time around, I am emphasising it even more.
With Open Learning, we did make a lot of inroads into the science and practice of learning. We funded a lot of research on the neuroscience of learning, and it is still going well, but my regret is that it was the sizzle and not the steak that got the attention.
Do you believe that AI- and automation-enabled learning environments make it harder for novices to achieve mastery in fields requiring deep experience and deliberate practice, such as medicine? To what extent is this a matter of adapting our thinking around learning versus balancing trade-offs like expertise, convenience, and cost? How should higher education institutions address these challenges?
It depends on how AI is deployed in any situation. I use AI extensively and am on various AI beta programs because I want to “stare into the abyss”, so to speak. Furthermore, I do research in AI and augmented reality. My view is that AI can be an enormous help. But I also believe that we, the professional class, have a lazy tendency to look for pat answers, not nuanced ones. We would prefer to hit a button and assume it’s going to work in all circumstances. While I think AI can work in many circumstances, it will not necessarily work the same way each time.
Let me give you an example. I can say for sure that using Waze and Google Maps has hurt my sense of direction. When I compare my current state–I live in Kuala Lumpur today and take Grab everywhere I go–to the time I lived in various parts of Europe and had to walk a lot to get around, my sense of direction has become abysmal. So there’s no doubt that technology can stop you from learning and lead you to take such abilities for granted.
On the other hand, I’m helping out a family member with a health situation; it is nothing serious, just something that requires lifestyle changes. I have been using AI, and I have to say that I could not have found a better coach. I use a combination of ChatGPT, Perplexity, and Claude. I am always second-guessing AI in case it hallucinates, but I learned more in a month than the patient did in six months under the tutelage of a doctor.
So when you ask AI, take the answers it gives you and then go to the primary source; the results can be exceptional. It is like playing tennis against a wall. The only difference is that a wall is passive whereas AI adds information when the ball comes back to you. You can ask for a flat stroke or a topspin. Done right, AI can be absolutely exceptional. But my great fear is that we will keep looking for pat answers and end up doing damage by giving students tools like Waze or Google Maps, which reinforce lazy learning methods. Learning flourishes where learners face something called “desirable difficulties”.
AI is at an early stage of incredible acceleration. The DeepSeek bombshell is a reminder of the unstoppable and reverberating progress in this cauldron of innovation. We should not think of banning its use by students. Instead, professors have to figure out how to use it. They must think like epistemologists, questioning and understanding the nature of knowledge. One thing though: exams may have to move back to the classroom because the outcome of this interaction among the professor, the student and the AI needs to be measured for the student in isolation.
What are your thoughts on the growing trend towards vocationalism in higher education?
I have strong views about this. For me, this separation of the vocational and the theoretical has parallels to the class structure in our societies. You may find it interesting that MIT was originally dismissed as a vocational school when it was established just before the American Civil War. Its founding seal features a smith with a hammer on one side and a scholar reading a book on the other side. The motto under the seal reads ‘mens et manus’, meaning ‘mind and hand’. I would argue that to some extent, class warfare is mind versus hand, that is, people who work using their minds versus the people who work with their hands. But bridging the two is essential, because it does not matter how much you learn theoretically if you cannot implement it. My view is that there is a bit of a false dichotomy embedded in the question of vocational learning.
It is reminiscent of classifying people at an early stage based on a false understanding of ‘IQ’ (which has been discredited in recent times). I do believe that the cobbler’s son can appreciate ancient philosophy from Confucius as much as an accountant’s son can learn to program or master the artisanal craft of batik. We need to bridge these two views.
This class distinction is going to be one of our key issues with the AI-agency revolution that is coming, because those holding white-collar jobs who don’t want to get their hands dirty live in one world, and blue-collar workers live in another. But AI and AI-driven robotics will attack both these worlds, so both demographics will have to work together eventually. I don’t mean waging a war against the machines, but there is a need to invent a future in which we make the most of these tools–and of each other.
What advice would you give to students today, both youth and adult, especially in Asia?
My advice is actually hard to follow, but it is what it is. It is about the race to obtain agency as humans face off against machines. My view is that if you are doing a job that can be replaced, it will be replaced. In fact, the more agency, choice and decision-making a job entails, and the better educated a human is to take it on, the more likely the job is to remain AI-proof for the time being. Much of this has to do with the ability to conceive of something that isn’t in the data set that AI has scoured; we need to move outside the box that AI is occupying. Sound familiar? We have boxed ourselves squarely inside a trap of our own making, and the escape route is through education. And this involves agency, and enough of a rebellious spirit within ourselves to say things that are controversial or outside the norm. I would say that a world of agency and creativity is where we need to be, rather than a world that seems comfortable for now until the wildfires reach our doorstep. Or the seas, for that matter.