Artificial Intelligence has been occupying our minds a lot lately. For those of us shaping this space — beyond incorporating it into our jobs to realize its efficiency gains, or driving AI-powered products to market — there is the larger question of how it impacts our lives, society and humanity. The themes for its use seem straightforward: it promises exponential efficiency and automation in human lives and can advance technology, medicine, art, just about any form of human endeavor, by leaps and bounds. The fears around AI, however, could be catastrophic if realized, and they have lately dominated my contemplation of the topic.
The other day I used ChatGPT to generate bullet-point summaries of the uses of AI and jot them down. Here is what I got when I prompted it about the advantages of AI:
AI offers numerous advantages across various domains and industries. Here are some key advantages of AI:
1. Automation and Efficiency: AI enables automation of repetitive and mundane tasks, freeing up human resources for more creative and complex activities. This can significantly increase productivity and efficiency in various fields, such as manufacturing, customer service, data analysis, and logistics.
2. Decision-making and Problem Solving: AI systems can process and analyze vast amounts of data quickly, leading to more informed and data-driven decision-making. AI algorithms can identify patterns, trends, and correlations that humans may overlook, aiding in complex problem-solving and providing valuable insights.
3. Personalization and Recommendation: AI enables personalized experiences and recommendations based on individual preferences and behavior. This is widely utilized in areas like e-commerce, entertainment streaming platforms, and personalized marketing, enhancing customer satisfaction and engagement.
4. Improved Accuracy and Precision: AI algorithms can perform tasks with a high degree of accuracy and precision, surpassing human capabilities in certain domains.
5. Enhanced Customer Service: AI-powered chatbots and virtual assistants can provide 24/7 customer support, addressing common queries and offering personalized assistance. These systems can handle a large volume of inquiries simultaneously, reducing response times and improving customer satisfaction.
6. Data Analysis and Insights: AI algorithms can analyze large datasets to identify patterns, trends, and anomalies, facilitating data-driven decision-making and strategic planning. This is valuable for businesses, researchers, and policymakers seeking to extract meaningful insights from vast amounts of information.
7. Innovation and Research: AI has the potential to drive innovation and advancements across various scientific and technological fields. It can aid in developing new drugs, improving energy efficiency, optimizing transportation systems, and exploring space, among other areas.
On prompting about fears around AI, ChatGPT spat out the following answer:
Some common fears associated with AI include:
1. Job displacement: One major concern is that AI and automation technologies could lead to widespread job losses as machines become capable of performing tasks currently done by humans. People fear that their skills may become obsolete, leading to unemployment and economic inequality.
2. Loss of human control: There are concerns about the potential for AI systems to become too autonomous and make decisions that humans cannot understand or control. This fear is particularly relevant in critical domains such as healthcare, finance, and military, where AI systems could have significant real-world impacts.
3. Bias and discrimination: AI systems learn from data, and if the training data contains biases or discriminatory patterns, the AI system may perpetuate and amplify those biases in its decision-making. Concerns arise that AI could reinforce existing social inequalities and discriminate against certain groups, perpetuating systemic biases.
4. Privacy and security: AI systems often require vast amounts of data to train effectively. This raises concerns about the privacy and security of personal information, as well as the potential for misuse or unauthorized access to sensitive data.
5. Ethical implications: The development of AI raises ethical questions around issues such as transparency, accountability, and the potential for AI to be used for malicious purposes. People worry about the ethical implications of AI decision-making, including situations where AI systems may need to make choices that involve moral judgments.
6. Loss of Human Skills and Creativity: Some fear that reliance on AI systems could lead to a decline in human skills and creativity. If AI takes over tasks that require problem-solving, critical thinking, and creativity, humans may become overly reliant on machines and lose their ability to perform those tasks.
7. Unintended Consequences: AI systems may produce unintended and unforeseen consequences. These can occur due to biases in training data, the complexity of AI algorithms, or the inability of AI systems to understand context and make appropriate decisions in certain situations.
8. Existential risk: Some individuals express concerns about the long-term future of AI, contemplating scenarios where highly advanced AI systems surpass human intelligence and control, potentially leading to unforeseen and undesirable outcomes for humanity.
*[Fig 2]*
When I had these clearly jotted down and compared the advantages to the disadvantages, it was hard to avoid the conclusion that the advantages seemed (mostly) to serve the interests of corporations (which company wouldn't want better data analysis, automation and efficiency with far fewer people on its payroll?), whereas the disadvantages seemed more relevant to the human collective as a whole (loss of jobs, loss of control, privacy and security concerns, ethical considerations). It's not as black and white as that, of course. AI can be used for advances in medicine that will benefit society in innumerable ways. And fears of job losses around a disruptive technology do not take into account all the new jobs that will be created in a shifted paradigm, as the nature of work itself changes. Still, the broad, even if simplistic, conclusion is something I found hard to shake off. The uncertainty around this space for the immediate foreseeable future is immense, and we are ardently searching for guiding patterns.
A recent article I came across in the New Yorker, Ted Chiang's 'Will A.I. Become the New McKinsey?', feels apt. The article proposes a new metaphor for understanding the risks of artificial intelligence. Instead of comparing AI to genies in fairy tales, the author suggests thinking of AI as the management-consulting firm McKinsey & Company. Almost humorously, it suggests that, like consulting firms, AI has the ability to assist in achieving goals without taking direct responsibility. The article raises valid concerns about AI becoming a tool for capitalistic interests and increasing wealth concentration. It questions whether AI can ever be used to assist workers and promote economic justice rather than solely benefiting shareholders. The author strongly argues against the notion that AI is a neutral technology and concludes by highlighting the need for economic policies that distribute the benefits of technology appropriately to improve people's standard of living.
“The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”?
Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.
As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn’t really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role?”
*[Fig 3]*
Of course, there are far more draconian views on the future of humanity, centered on the notion of a runaway AI that approaches a singularity, inevitably escaping human control and posing a threat to humanity itself. From another piece in the New Yorker:
“Many researchers involved in A.I. believe that today’s fake people are just the beginning. In their view, there’s a good chance that current A.I. technology will develop into artificial general intelligence, or A.G.I.—a higher form of A.I. capable of thinking at a human level in many or most regards. A smaller group argues that A.G.I.’s power could escalate exponentially. If a computer system can write code—as ChatGPT already can—then it might eventually learn to improve itself over and over again until computing technology reaches what’s known as “the singularity”: a point at which it escapes our control. In the worst-case scenario envisioned by these thinkers, uncontrollable A.I.s could infiltrate every aspect of our technological lives, disrupting or redirecting our infrastructure, financial systems, communications, and more. Fake people, now endowed with superhuman cunning, might persuade us to vote for measures and invest in concerns that fortify their standing, and susceptible individuals or factions could overthrow governments or terrorize populations.”
In late March, more than a thousand signatories, including stalwarts in the field of AI, signed an open letter that calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signatories included the likes of Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology. Several high-profile AI experts have since been voicing their concerns and talking of the need for regulation in a space advancing at a pace that regulation is not even geared to catch up with.
*[Fig 4]*
While a reaction of this sort seems warranted, it's also increasingly evident that, given what's at stake and the impossibility of chasing a genie that's out of its bottle, it may be more fruitful to pursue a clearer definition of what constitutes a path to success than a movement to stop the development of AI. Unbridled optimism also abounds in tech corners, such as Marc Andreessen's view that AI unleashed will bring society only benefits. Whichever side of the debate one falls on, given that open-source models have now become widely available, and given a corporate arms race geared to advancing the technology at breakneck speed, the only way AI seems poised to develop is onwards.

The best we can do as a collective is work quickly towards regulatory frameworks, consortiums, policies and initiatives that address the threats of AI head on and steer its use towards authentic, transparent and safe protocols held to audit and accountability standards. This includes transparency on details like the provenance of training datasets and the assessment of algorithms and control mechanisms, much like the provisions of the Algorithmic Accountability Act introduced last year in the US Senate. Regulation and oversight of the creators of AI technologies is necessary to ensure conformance to what is and is not acceptable in terms of societal outcomes, especially in high-stakes arenas such as education, medicine, mental health, security and employment.


