Barriers in AI adoption for Government

There is no doubt that Governments are turning towards a data-centric approach to making pivotal decisions in their nation’s interest. As data grows across IT systems, the capabilities of Data Utilization Technology (DUT), which refers to IoT, Cloud Computing, Big Data and Mobile (ICBM), are also growing exponentially in preparation for the fourth industrial revolution. The rapid advances we are seeing in Artificial Intelligence (AI) will certainly be useful when applied to derive value from the data collected at the DUT stage.

Governments and their citizens, however, are not yet fully convinced that AI is safe, unbiased and good for the country. Many countries are coming up with a National Strategy for Artificial Intelligence in which they try to address these genuine concerns head-on. Some of the fears and measures that governments are struggling with are outlined below:

The proliferation of AI will provide convenient services and contribute to solving the problems our society is encountering. But the diffusion of new, innovative services will also create new sources of conflict among various stakeholder groups.

Legal Hurdles

1. Big data ownership & protection

   Big data is characterized by Volume, Velocity and Variety (the 3Vs). The biggest producers of big data in the world today are social media channels (Facebook, YouTube, LinkedIn, etc.). One of the complex issues is: who owns the data? Is it the platform provider? Has the provider taken the user’s consent to share the data with others or to apply analytics to the user’s data, and to what extent? Can the results of profile analysis be marketed, or is there a risk of compromising the interests of the individual or society through commercialization of this data?

AI can also create content that may itself be creative. Would current copyright law apply to it? Another challenge arises from the convergence of processes and the laws applicable to them. Nowadays creative work is also available in digital format, stored in databases. Whereas in some countries the distribution of data held in a database is protected for 5 to 10 years, copyright works are protected for a much longer period. Can copyright law be applied to creativity produced by AI?

2. Antitrust/Competition Laws

   Only a few frontier organizations have the money and infrastructure to hold the data and process it, limited only by their imaginations. How can governments prevent a market monopoly by these companies, which have all the resources and tools to derive value from data? The advantage these organizations hold is clearly unfair to consumers and to competition.

3. Strengthen Data Governance

   National Statistics Offices are often dependent on other departments for data, and a lot needs to be invested to strengthen them. The investment should enable them to automate data collection, establish the necessary infrastructure, hire skilled resources for data processing, adopt the required standardization and quality assurance, and strengthen data security and overall management procedures.

4. Regulation to prevent misuse of personal information

   Departments collecting individual data need to take all measures to prevent the leakage and misuse of Personally Identifiable Information (PII). For example, PIPA (the Personal Information Protection Act of South Korea) has been amended to allow data to be anonymised and pseudonymised. Personal data controllers may separately maintain the additional information that, when combined with the pseudonymised data, reproduces individual-level data, and further processing of pseudonymised data for the purpose of identifying individuals is prohibited. A minimal sketch of this separation is given below.

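The Python sketch below is one way to picture that separation: direct identifiers are swapped for random tokens, and the re-identification table is kept apart from the data used for analysis. The record fields, names and token scheme are illustrative assumptions, not anything prescribed by PIPA.

```python
import secrets

# Illustrative records; the field names and values are assumptions, not defined by PIPA.
records = [
    {"name": "Kim Minjun", "national_id": "900101-1234567", "income_band": "B", "region": "Busan"},
    {"name": "Lee Seoyeon", "national_id": "850505-2345678", "income_band": "C", "region": "Daegu"},
]

def pseudonymise(rows, pii_fields=("name", "national_id")):
    """Swap direct identifiers for random tokens.

    Returns (pseudonymised rows for analysis, re-identification table).
    The table must be stored and governed separately: combining it with
    the pseudonymised data reproduces individual-level data, and doing so
    to identify individuals is exactly what the law prohibits.
    """
    analysis_rows, reid_table = [], {}
    for row in rows:
        token = secrets.token_hex(8)                       # random, non-derivable token
        reid_table[token] = {f: row[f] for f in pii_fields}
        safe = {k: v for k, v in row.items() if k not in pii_fields}
        safe["token"] = token
        analysis_rows.append(safe)
    return analysis_rows, reid_table

analysis_rows, reid_table = pseudonymise(records)
print(analysis_rows[0])  # {'income_band': 'B', 'region': 'Busan', 'token': '...'}
```

The design point is that neither dataset identifies anyone on its own; only the data controller, under separate access controls, holds both.
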
5. Civil and Criminal Liability

   If a vehicle in which an AI function plays the driver’s role runs into an accident, who shall be held responsible: the driver, the car owner, the car manufacturer, the AI or the AI programmer? AI is not a person and is not necessarily supervised in all cases. It takes decisions based on data in ways that are unpredictable to the owner of the AI, or it may act autonomously where the programmer had no role to play. Conventional civil liability laws are insufficient to cover damages caused by AI.

The same holds true for criminal acts. There is no doubt that a person who intentionally uses AI to commit a crime is liable and their acts are punishable. However, where the decisions are autonomous and unpredictable to the owner of the AI, neither the owner nor the programmer can readily be held responsible.

6. Discrimination and Bias

   An AI’s decisions depend on the data on which its model is trained. If the data itself is biased, the decisions of the AI system are likely to be biased and discriminatory. The common AI APIs are held by a few companies and are available at a cost. The principle of making technological capabilities equally available to the public at large (including the disadvantaged or unskilled) will obviously come at a subsidy cost to the government.
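
One simple way to surface such bias is to compare the system’s decision rates across groups. The Python sketch below is a minimal demographic-parity check on hypothetical decisions; the group labels and figures are invented for illustration and stand in for real decision logs.

```python
from collections import defaultdict

# Hypothetical decisions from an AI system trained on historical data:
# each pair is (applicant group, approved?). Labels and values are invented.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(rows):
    """Approval rate per group: a simple demographic-parity check."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in rows:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")   # a large gap flags the model for review
```

In practice a check like this would run on the system’s actual decision logs and feed into an audit or review process before decisions affect citizens.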

Operational Hurdles

1. Understanding of Data

   Traditionally, government departments are structured to deliver on governance. These departments are not yet particularly technology savvy and depend on professionally managed organizations engaged for the task. Many departments have taken to digitalization, and the growth of databases has been so immense that it is getting harder for departments to even know what is flowing through their databases in day-to-day operations. Further, departmental structures still lack key IT roles such as Chief Information Officer, Chief Data Officer and Chief Information Security Officer who can give direction to the department’s IT efforts. Organizations that do not possess the capability to understand and manage their data cannot take advantage of AI; a first step is a basic inventory of what each database actually holds, as sketched below.

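As a rough illustration, the Python sketch below profiles a SQLite database: it lists tables, row counts and per-column null rates. The database file name is hypothetical, and a real department would build a proper data catalogue on top of its production systems, but the starting point is the same.

```python
import sqlite3

def profile_database(path):
    """Print a quick inventory of a SQLite database:
    table names, row counts and per-column null rates."""
    con = sqlite3.connect(path)
    cur = con.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()]
    for table in tables:
        rows = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        print(f"{table}: {rows} rows")
        # Materialise column names before issuing further queries on the same cursor.
        columns = [c[1] for c in cur.execute(f"PRAGMA table_info({table})").fetchall()]
        for col in columns:
            nulls = cur.execute(
                f'SELECT COUNT(*) FROM {table} WHERE "{col}" IS NULL').fetchone()[0]
            print(f"  {col}: {nulls / rows:.0%} null" if rows else f"  {col}: no rows")
    con.close()

profile_database("department_records.db")  # hypothetical file name
```
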
2. Data and AI skills

   AI and data management skills are in short supply in government. These skills are costly to acquire and too technical for government officials to spare the time and energy to learn. In addition, government officials are concerned about the ethical and legal ramifications of unintended consequences or misuse of data. Silos between government functions also limit the applicability and usefulness of data-driven applications.

3. AI Landscape

   While there are only a few selected players when it comes to data collection and harnessing, this is not the case with AI. There are numerous small players with significant know-how and expertise. The challenge with smaller players is that they may not have deep enough pockets to scale their solutions to large government projects. Further, newer organizations may not have experience working with government and may find it hard to deal with government processes.

4. Legacy Culture

   Government investments are subject to public scrutiny, and therefore processes and practices are well established. Government employees are driven by their ability to contribute to a larger impact, yet they may find it difficult to adapt to a new transformative technology such as AI. These cultural issues make it difficult for government to be agile.

5. Procurement Challenges

   Government procurement cycles tend to be long, whereas AI algorithms and technology have a much shorter shelf life. Government would prefer to own and customize AI products over a long duration, whereas most AI vendors treat their algorithms as intellectual property and may be reluctant to allow this. As data is continuously ingested and demands from policy or other sources keep coming in, AI products owned by smaller companies may not be able to address all requirements.

Governments certainly need to adopt a new operating model to stay technologically relevant and agile. Governments are supposed to be just and non-discriminatory, to punish criminal and civil offenders, and to work in the interest of the people. Before encouraging wider adoption of AI at large scale, a government should do its due diligence and put strategies in place to operationalize it.
