
ARTIFICIAL INTELLIGENCE

December 11, 2018 | Expert Insights

The accelerated development of artificial intelligence (AI) has brought it into the hands of the masses for everyday uses. AI-connected speakers alone are forecast to be in 55 per cent of U.S. homes by 2022. Sales of other smart home products in 2018, such as door locks, televisions and bathroom mirrors, are expected to be up 34 per cent from 2017. Overall AI spending is projected to reach $46 billion by 2020. As AI becomes more commonplace and widespread, it raises questions about the associated benefits and costs.

Artificial intelligence refers to software and technology that uses data and algorithms to perform tasks that would otherwise require human cognition. Machine learning is an application of AI that gives systems the ability to learn and improve from experience without being explicitly programmed. Through machine learning, AI-powered systems accumulate knowledge and can surpass human performance on specific tasks. Human intelligence is considered to be a combination of diverse abilities, and AI research has investigated the same premise by evaluating the following components of intelligence: learning, reasoning, problem solving, perception and using language. There are varying levels of intelligence within the field of AI, and as the field is studied and applied more deeply, these are likely to expand further.
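As a toy illustration of that definition, the sketch below fits a classifier from labelled examples rather than hand-written rules. It assumes Python and the scikit-learn library, neither of which the article names; it is a minimal example of the idea, not a description of any particular system.

```python
# Minimal sketch: the model "learns" a decision rule from labelled examples
# instead of being explicitly programmed with one (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)            # measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)     # no hand-written rules supplied
model.fit(X_train, y_train)                   # the rule is inferred from the data

print("held-out accuracy:", model.score(X_test, y_test))
```

The key point is that performance improves with the quantity and quality of the examples the system is given, rather than with additional hand-coded rules.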

As AI comes to the forefront of technological advancement, its standards, ethics, and value are tested. The 21st-century technologies – genetics, nanotechnology, and robotics (GNR) – are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. Public opinion remains unresolved on whether the merits of AI outweigh its drawbacks as the debate around the future of AI continues. 

AI, as a field of study, began with a summer research conference hosted at Dartmouth College in 1956. Marvin Minsky, an acclaimed AI practitioner, was in attendance at the conference. Minsky went on to win the 1969 A.M. Turing Award, the highest award in computer science, for his pioneering work in AI. The award is named after Alan Turing, a leading cryptanalyst at the Government Code and Cypher School in England, who is renowned for the ‘Turing Test’, which proposes a way to address the question of whether machines can think.

The benefits of AI have been widely extolled. AI enables better identification of patterns, allowing for more nuanced predictions of behaviour. From healthcare to recruitment to digital marketing, a variety of industries and sectors have found AI to be advantageous to their business operations. Across industries, AI can minimize the time spent on mundane and tedious tasks. It increases efficiency through faster decision-making and fewer errors, potentially allowing increased revenue, decreased costs and additional job opportunities.

As organisations explore the use of AI, it also expands existing fields of knowledge. For example, NASA used an AI design process to create an interplanetary lander concept that would allow it to explore distant moons. In healthcare, AI is enabling lower rates of diagnostic errors, more effective drug research and development, and targeted health and nutrition recommendations. Healthcare-focused AI start-ups have raised more funding than those in any other sector over the last five years. AI has also made its foray into the art world through AI-generated artworks, now being sold at Christie’s auction house. In AI-powered marketing, consumers benefit from targeted marketing suited to their preferences, and companies benefit from more comprehensive data about consumer preferences. AI technology also has applications in defence operations and in preventing financial crimes.

AI seems to be taking over a diverse range of industries but also brings with it a whole host of adverse consequences. A key drawback of AI’s growing presence in a multitude of industries is the economic hardship it brings to those replaced by AI technology. While the rise of AI has certainly resulted in the creation of jobs, it is also responsible for the displacement of jobs: AI replaces humans involved in mundane, administrative and repetitive work while simultaneously creating jobs for AI specialists and tech companies involved in AI. Studies remain unclear on the net effect on jobs but are unambiguous about the negative effect AI has had on income inequality, as the displaced jobs are predominantly low-skill ones. As driverless cars appear on the horizon, so does the threat to the livelihoods of taxi drivers and chauffeurs. AI might create more wealth than it destroys, but it carries a significant risk of uneven distribution of that wealth. In the same vein is the risk of an unequal distribution of power, as some countries invest heavily in AI and advance more quickly than others. AI also enables the dehumanisation of actions, which could result in increased defence threats.

The rapid rise of AI technology into the mainstream has also raised concerns about its true efficacy. IBM’s Watson is a commercial success but has also been found to recommend unsafe cancer treatments on occasion. AI-powered software is sometimes unable to match human instinct and can be baffled by simple situations that a child could interpret, making it an inadequate solution in those cases.

Complete dependence on an algorithm’s impartiality can be detrimental as well. Uber’s pricing algorithm, which is based in part on demand, failed to respond adequately to a shooting incident that produced a spike in customer requests, causing an outrageous surge in pricing for people trying to leave the area. Looking solely to the algorithm can prevent companies from recognising exigent circumstances that necessitate a deviation from standard practices.
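A hypothetical sketch of such a demand-driven multiplier, with a manual override for exigent circumstances, might look like the following. The function name, thresholds and cap are illustrative assumptions and do not describe Uber’s actual algorithm.

```python
# Hypothetical demand-based surge pricing with an emergency override.
# All names and numbers are illustrative, not any company's real logic.
def surge_multiplier(requests: int, available_drivers: int,
                     emergency_override: bool = False,
                     cap: float = 1.0) -> float:
    """Return a price multiplier for the current demand conditions."""
    if emergency_override:
        return cap                        # hold prices flat during a declared emergency
    if available_drivers == 0:
        return 3.0                        # illustrative hard ceiling when supply is exhausted
    ratio = requests / available_drivers
    return min(3.0, max(1.0, ratio))      # scale with demand, bounded between 1x and 3x

# Normal demand: modest surge
print(surge_multiplier(requests=120, available_drivers=100))              # -> 1.2
# Spike after an incident, with a human-triggered override
print(surge_multiplier(requests=900, available_drivers=100,
                       emergency_override=True))                          # -> 1.0
```

The point of the override is precisely the one the paragraph makes: a human (or a separate rule) must be able to interrupt the standard algorithmic behaviour when circumstances fall outside what it was designed for.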

Finally, there is the concern about bias in AI. AI technology was meant to eliminate the human bias that leads to errors, but this does not always happen. The quality of AI solutions depends on the data and algorithms they are built on, and unconscious bias in AI design and development teams can result in biased AI. In decisions such as sentencing and parole, or the service areas of delivery companies, AI has often reinforced racial bias, producing higher sentences for black defendants or limited service to areas with a higher concentration of minority residents.
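The sketch below, built on entirely synthetic data and assuming Python with NumPy and scikit-learn, illustrates the mechanism in miniature: a model fitted to historically skewed decisions reproduces the skew through a correlated proxy feature, even when the protected attribute itself is withheld from training.

```python
# Synthetic illustration of bias entering through training data.
# All variables are invented for the example; no real dataset is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                                  # protected attribute (0 or 1)
neighbourhood = (group + rng.random(n) > 0.8).astype(int)      # proxy feature correlated with group
risk = rng.random(n)                                           # underlying risk, identical across groups

# Historical decisions were systematically harsher on group 1 at the same risk level
historical_decision = (risk + 0.3 * group > 0.7).astype(int)

# Train only on risk and the proxy; the protected attribute is withheld
X = np.column_stack([risk, neighbourhood])
model = LogisticRegression(max_iter=1000).fit(X, historical_decision)
pred = model.predict(X)

print("predicted positive rate, group 0:", pred[group == 0].mean())
print("predicted positive rate, group 1:", pred[group == 1].mean())  # expected: noticeably higher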

As countries evaluate the nuances of AI’s expanding applications within their borders, ensuring smooth AI practices in the future requires the consideration of multiple factors. The risks associated with AI, cyber security threats, the ambition to become a world leader in AI technology, the identification of additional industries and markets that could benefit from AI, and ways to leverage existing expertise will all be concerns for countries to keep in mind as they plan future investment in AI technology.

AI-powered technology has myriad benefits with far-reaching impact, but addressing the concerns and risks it brings will be crucial to its future. Policymakers should consider creating AI-specific regulations to ensure its benefits continue while minimizing its negative externalities. Technical collaboration with researchers and developers will be essential, as the complex mix of AI’s benefits and risks requires policymakers to take a nuanced look at the field in order to create effective and well-structured regulation. An efficient and meticulous regulatory framework can guide the progress of AI technology to address its challenges while continuing to positively impact millions of lives.

A key incentive for policymakers and regulatory authorities to be intentional about policy changes is cyber security. As major instances of hacking and data breaches become more frequent, cyber security becomes an increasing concern for countries. AI technology can enable faster detection of communication patterns indicative of hacking. At the same time, the uptick in AI-powered solutions in military and defence activities means that hacking also creates a security risk for national defence strategies. Ensuring strong protections and regulated practices will be crucial as countries seek to shore up their cyber-security vulnerabilities.
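One common approach of this kind is anomaly detection over traffic features: a model learns what normal activity looks like and flags records that depart from it. The sketch below assumes Python and scikit-learn’s IsolationForest; the feature names and values are illustrative assumptions, not a description of any specific product.

```python
# Illustrative anomaly detection over synthetic network-traffic features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: [bytes_sent, connections_per_minute, failed_logins] (hypothetical features)
normal_traffic = rng.normal(loc=[500, 20, 1], scale=[100, 5, 1], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)                     # learn what "normal" looks like

suspicious = np.array([[5000, 300, 40]])         # sudden spike in all three signals
print(detector.predict(suspicious))              # expected: [-1], i.e. flagged as anomalous
```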

However, there is also a strong incentive for policymakers to abstain from strict regulation as countries race to dominate the AI game. Even as Europe makes plans to establish principles for the use of AI, it must monitor the progress of the world leaders in AI. American and Chinese tech firms have thus far taken the lead in AI innovation, putting pressure on Europe to hasten its development of AI software and compete at the global level. To implement these principles while also augmenting its efforts to compete with the top countries, the European Union has identified alternative ways to dominate.

Europe’s strategy for an increased presence in world AI advancements provides a strong framework for other countries to emulate. The EU has chosen to emphasize its focus on ethics by aiming for dominance in ethically responsible AI. As ethics becomes a prominent part of the AI conversation, other countries interested in the global competition can look to Europe. A focus on the ethical aspects of AI will also allow consumers to put greater trust in AI applications. Nations can also look to countries with more established AI services to determine which solutions have been most effective at addressing the challenges of AI. 

As with ethics, other competitors can look to untapped markets to become world leaders. Within America and China, a handful of tech firms (such as Amazon and Alibaba) lead the cutting-edge advancements in AI technology. The EU has already noticed that these companies operate mainly in the business-to-consumer market, making the business-to-business and public-to-citizen markets relatively easier to dominate. Focusing on these markets will allow the EU and other interested countries to gain a first-mover advantage and a competitive edge in these nascent markets.

In addition, countries can look to their existing expertise and find ways to leverage it in the AI field. Malta, for example, became a world leader in blockchain despite its small size and could use this proficiency to replicate the process with AI technology. Israel’s government has committed to increasing its use of AI in healthcare; by leveraging its large quantities of patient data, the country can use AI to gain insights, identify patterns and make recommendations.

As India looks to improve its global AI standing, paying close attention to other countries’ attitudes and approaches to AI can provide the insight needed to cultivate its own AI industry. India has a strong start-up culture and can transform its traditional entrepreneurial model using AI technology. India is one of 20 countries to have created a national AI strategy in the last couple of years and has differentiated itself through its commitment to social development. However, policymakers need to allocate significantly more funds to ensure sufficient education and research are undertaken in the AI sector. In addition, India needs to ramp up its efforts to improve its infrastructure, as connectivity is a crucial factor in the advancement of AI.

Countries like India need to investigate AI further, in collaboration with the corporate sector, to understand its benefits for businesses and the economy. As a productivity enhancer, AI can allow India to leverage its outsourcing expertise and infrastructure to increase AI usage and subsequently gain higher margins. India has seen growth in its AI industry and in private sector investment. However, to compete adequately in AI innovation at the global level and crack the top five countries, India needs to evaluate its practices and identify ways to improve AI advancements while also increasing private sector investment. Becoming a top AI country is an attainable goal, but it requires cogent thinking and strong implementation.