This article was originally published in Macau Business and authored by Emanuel Roth Soares, Coordinator of the M Venture Desk at MdME. In partnership with the Macao Young Entrepreneur Incubation Centre, Emanuel provides mentorship and consultation for its members.
“Machines take me by surprise with great frequency.” - Alan Turing, 1950
I may be around two years late to the party, but artificial intelligence has been on my mind a lot this year. Things are moving quickly and are now accelerated by strong social and political currents.
Not since the Spielberg film of the same name has AI held so much cultural relevance. In geopolitical terms, we’re seeing a new superpower arms race take shape, this time with computing power. As the US government attempts to throttle AI competition through export controls - in what many analysts believe is the key battleground of a new trade war - the meteoric and symbolic rise of DeepSeek has supercharged competition in the AI market. With new and better language models now coming out across the world on an almost weekly basis, there is every reason to expect that the AI landscape will look completely different a year from now.
In this neck of the woods, much hype was generated in early March when the head of the PRC’s National Development and Reform Commission, Zheng Shanjie, announced the creation of a new state-backed fund – the State Venture Capital Guidance Fund – which will focus on investment in cutting-edge tech sectors, including advanced chip design, quantum computing and artificial intelligence. Here in the Greater Bay Area, aspiring entrepreneurs are excited to see how newfound policy support and hard capital trickle down to our local ecosystems.
What seems certain is that the central government is shifting its considerable political and economic weight to focus on AI as one of the key strategic economic sectors of our times, much as it has previously done with, say, renewable energy and electric vehicles. There is now a real recognition that the exponential growth in computing capacity will fundamentally reshape technology, the economy, civil society and even the military, and whoever leads the way globally will gain a crucial strategic advantage.
Like many others, my own field of work - the legal profession - is grappling with the consequences of this rapid progress. AI is no longer a distant possibility; it is already here and is reshaping the practice of law. From contract analysis to legal research to predictive analytics and case management, AI tools are enhancing efficiency, accuracy, and accessibility. AI-powered platforms can now comb through thousands of documents in minutes, identifying relevant contract clauses or potential risks with precision that rivals—and often surpasses—human capability.
While integrating AI into our daily lives and workplaces brings considerable benefits and opportunities, larger philosophical questions abound. In an era of misinformation, polarization, loss of faith in institutions, reinforcement of biases and other such trifling annoyances, AI hasn’t (yet) provided us with the required solutions, and may even exacerbate some of these problems.
At a corporate AI training session I attended recently, as we were discussing our AI-generated responses, one of the attendees posed a question: won’t these answers just reinforce the biases found in the available data?
This was, in fact, a good observation. While these AI platforms seem quite capable of reasoning and providing us with convincing answers, at their core they function as tools for analyzing large quantities of data and generating answers based on model training and probability metrics (i.e. the correct answer is the most statistically probable one).
At its most reductive, a large language model has no innate sense of morality. There is no reason for it to question or pass judgement on the biases or flaws inherent in the information on which it was trained, and all the content it generates will then be fed back into the pool of available information, ultimately creating a self-reinforcing loop.
This fundamental nature of AI systems is why AI ethics has become a fascinating and increasingly relevant field of work. It grapples with questions such as: How can we align AI values with our own? Is it useful or appropriate even to try? How do we determine the correct “values” which AI should attempt to reflect?
AI models, then, are not neutral: the values embedded in the training of a given model (even just the language it is trained on) will inevitably steer it to favor certain cultural, political, or linguistic biases and stereotypes. Is this necessarily a bad thing? It now seems likely that we will end up with a diversity of competing AI models, each useful to different people for doing different things.
While no one has solutions to all of these challenges, working out how best to integrate AI into the fabric of our daily lives should be a top priority on everyone’s (AI-generated) to-do list. What is certain is that AI is fundamentally reshaping how we write, think and search for answers to our questions, in a way that will make our lives easier and ultimately better, as the great technological leaps of history have done before.
Finally, in the spirit of full disclosure, after being approached to write on this topic I asked one of the leading AI platforms to write the piece for me. Within seconds, it had produced a very nicely structured summary: clear, concise, and addressing all the relevant issues; an adequate synthesis of everything that has already been said on this topic ten thousand times before (I’ve kept one of the paragraphs here – you can probably guess which one).
My problem with this AI-generated content is that it is still too evidently machine-made: too mechanical, carefully adjusted, sterile, devoid of personality or flaw. It does make me wonder: there is a plausible near future in which most of our daily reading material is fully generated by AI, perfectly calibrated to our interests and preferences - efficient, effective, safe, and boring. This is a distressing thought indeed.
In an age where we must face up to misinformation, job displacement, privacy concerns, killer robots and crippling existential risk brought about by the advent of our new AI overlords, being, well, sort of bland - may perhaps be AI’s biggest sin of all.