By Minna Ruckenstein and Julia Velkova

At the level of Finnish central government, automation is predominantly discussed in relation to Artificial Intelligence. The government’s goal is to pool society-wide resources to foster developments around AI and automated decision-making. The governmental AI Programme consists of concrete initiatives to boost economic growth and to revitalise companies and the public sector. An ethical information policy—commissioned to address questions of data ownership and the effects of automation on Finnish society—will be finalised soon. Civil society actors, in turn, are concerned with the ethical uses of the data that underpin automated decision-making. A key actor in the process is an international non-governmental organization, MyData Global, which grew out of a Finnish data activism initiative. Other issues on the agenda are the discriminatory nature of credit scoring and clarifications of mistakes made by automated processes related to tax assessments. Such clarifications are important in the light of ongoing automation projects in the public sector. Social insurance institutions, among others, are struggling to resolve the challenges of ensuring compatibility with existing laws, and with the difficulties of justifying the logic of automated decision-making processes. The start-up sector for ADM solutions is growing rapidly and involves work on a variety of projects including prospective employee personality assessments, health diagnosis, and the content moderation of online discussions.

Political debates on aspects of automation – Government and Parliament

The Artificial Intelligence Programme

The Artificial Intelligence Programme [FI 1], commissioned by the Minister of Economic Affairs, was launched in May 2017 and the final AI strategy is due in April 2019. To implement the work of the initiative, a steering group was established, chaired by Pekka Ala-Pietilä, CEO and co-founder of Blyk, and former president of Nokia Corporation and Nokia Mobile Phones. The group's first report, published in October 2017, introduced AI as an opportunity for Finland to be among the winners of the economic transformation that, it is believed, will revolutionise industries and organizations from transport and healthcare to energy and higher education. The AI Programme describes the application of AI as a society-wide pressure for rapid transformation that offers opportunities for economic growth and renewal for companies and the public sector, if these opportunities are dealt with in a systematic manner. One of the stated aims of the programme is the formation of “a broad-based consensus” on the possibilities of AI to foster “a good artificial intelligence society”. A broad consensus on the matter is seen as essential because of the limited resources of a small nation: contributions to the AI field must be implemented efficiently, with a consistent emphasis on economic impact. While the specifics of automated decision-making are not mentioned in the AI Programme, the simultaneous improvement of service quality and the level of personalization, alongside expected efficiency gains, points to expectations of considerable automation in the provision of public services. The AI Programme foresees efficiency increases in service provision as the main benefit for the public sector, in particular when it comes to the provision of healthcare services. With the help of AI, services provided by the public administration will become free of the confines of time and location. 
Digital services can utilise appropriate information at the right time, and hence proactive interventions can be made to enhance citizen wellbeing. In short, AI is expected to help the public sector to predict service needs, and respond in a timely manner to each citizen’s needs and personal circumstances.

Key points of Finnish AI strategy

The proposed actions for the Finnish AI strategy emphasise the competitiveness of companies through the use of AI [FI 2]. This goal is supported by various proposals, for instance, by ensuring that Finnish data resources are accumulated and enriched systematically, that their technical and semantic interoperability is ensured, and that datasets are utilised in all sectors of society. The adoption of AI is simplified with “platform economy trials” that bring together companies and research facilities for the piloting of AI solutions. Such piloting is supported by an independent facilitator—the CSC IT Center for Science, a non-profit organization owned by the state (70%) and higher education institutions (30%)—which offers computational and data storage resources. In terms of public administration, the AI strategy promotes public-private partnerships and the renewal of public services. In addition, new cooperation models are being established to boost digital service development. The goal is to build public services that are the best in the world, always available and in any language needed. AI applications are developed to better anticipate and predict the service needs of the future. The strategy promises that time-consuming queues and telephone appointments will be eliminated by the use of personalised services and digital assistants. Work towards that aim, however, has only just begun.

Elements of Artificial Intelligence

The strategy work emphasises that Finns must have access to AI literacy — a guaranteed basic understanding of AI principles. In order to support this goal, an English-language online course called 'Elements of Artificial Intelligence' [FI 3] was developed by the Department of Computer Science at the University of Helsinki in partnership with the technology company Reaktor to form part of the Finnish AI Programme. The two parties involved produced the course work pro bono. The course introduces basic concepts and applications of AI and machine learning with the aim of increasing public understanding of AI and better equipping people to participate in public debates on the subject. The societal implications of AI, such as algorithmic bias and de-anonymisation, are introduced to underscore the need for policies and regulations to guarantee that society can adapt well to the changes the ever-widening use of AI brings. The course is free for anyone to attend, and close to a hundred thousand participants have already signed up.

FCAI – Finnish Center for Artificial Intelligence

The Finnish AI strategy emphasises the necessity of a Center of Excellence for AI and applied basic research that will attract top-level international expertise. The recently established Finnish Center for Artificial Intelligence (FCAI) [FI 4], initiated by Aalto University, the University of Helsinki, and VTT Technical Research Centre of Finland, aims to respond to this need by becoming a nationwide, cross-disciplinary competence centre for AI. The mission is to create “Real AI for Real People in the Real World”, a type of AI which operates in collaboration with humans in various everyday domains. As part of the Center, the FCAI Society [FI 5], a group which consists of experts from philosophy, ethics, sociology, legal studies, psychology and art, will explore the impact AI has on all aspects of our lives. Both FCAI Society and FCAI researchers are committed to engaging in public dialogue when considering the wider implications of AI research.

Request of clarification about using automated tax assessments

In September 2018, the Deputy Parliamentary Ombudsman, Maija Sakslin, requested clarification from the Tax Administration about the use of automated tax assessments. [FI 6] In 2017, the Deputy Ombudsman had settled two complaints regarding mistakes made by automated processes. In her position paper, the ombudsman writes that in two of her complaint settlements, the Tax Administration reported that taxation procedures for self-assessed taxes are mostly automated. Although the mistakes made by automated processes had been fixed, the ombudsman is concerned about how taxpayers’ legal protection, good administration and the accountability of public servants are secured in automated taxation decisions. [FI 7] The ombudsman states that it is problematic that the information request letters and taxation decision letters sent by the automated taxation systems do not include any specific contact information, only the general service numbers of the Tax Administration. Moreover, the automated taxation process produces no justification for the decisions it makes. If there are problems with a decision, the issue is handled at a call centre by a public servant who has not taken part in the decision-making and has no detailed information about the decision. Based on the two complaints, the ombudsman stated that, in terms of automated taxation, taxpayers’ legal rights to receive an accurate service and a justification of taxation decisions are currently unclear. The Deputy Ombudsman has requested that the Ministry of Finance obtain the necessary reports from the Tax Administration and produce its own report on the legality of automated taxation. The report was due in November 2018.

Report on ethical information policy in an age of Artificial Intelligence

In March 2018, the Ministry of Finance set up a group to prepare a report on ethical questions concerning information policy and AI. A draft of the report was made publicly available and open to comment until the end of October and is currently being finalised. The report discusses policies regarding individuals' rights to their own data (MyData), the openness of AI solutions, and the prevention of adverse effects that automated decision-making may have on support systems and society. The policy needs are described at a very general level, and any possible regulatory or legislative measures resulting from the report will take place under the next government. [FI 8]

Political debates on aspects of automation – Civil Society and Academia

MyData Global

In October 2018, an international association for the ethical use of personal data, MyData Global [FI 9], was founded to consolidate and promote the MyData initiative that started in Finland some years ago. The association’s purpose is to empower individuals by improving their right to self-determination regarding their personal data. The human-centric paradigm is aimed at a fair, sustainable, and prosperous digital society, where the sharing of personal data is based on trust as well as a balanced and fair relationship between individuals and organisations. The founding of this association formalises a network of experts and stakeholders working on issues around personal data uses, who have been gathering in Helsinki annually since August 2016. As machine learning and AI-based systems rely on personal data generated by and about individuals, the ethics of these technologies have been at the forefront of the meetings of the MyData community. Much like that of personal data itself, the potential of these technologies for individual as well as collective and societal good is recognised as being enormous. At the same time, considerations of privacy and the rights of individuals and collectives to know what data about them is being used, and for what purposes, as well as to opt out selectively or wholesale, must be taken seriously, argue the proponents of the MyData model. [FI 10] At the core of MyData is an attempt to bring experts together to find arenas of appropriate intervention in terms of building an ethically more robust society.


Ethical Challenge for Enterprises

In terms of ethics, Finland’s AI Programme challenges enterprises to share their understanding and use of ethical principles with the aim of making Finland a model country for the ethical uses of AI. Questions that this challenge [FI 11] addresses include the following: Is the data used to train AI biased or discriminatory? Who is responsible for the decisions made by AI? This work started recently—the kick-off event for companies was held in October 2018. The aim is to promote self-regulation in companies and formulate principles that define how AI could be used in fairer, more transparent and trust-building ways. Leading Finnish enterprises that were the first to join the ethics challenge include the K Group, OP Group and Stora Enso, and currently more than fifty companies are participating in the challenge. [FI 12]

Oversight Mechanisms

Non-Discrimination and Equality Tribunal

In April 2018, the National Non-Discrimination and Equality Tribunal prohibited a financial company specialising in consumer credit from using certain statistical methods in credit scoring decisions. The Non-Discrimination Ombudsman had requested the tribunal to investigate whether the credit institution Svea Ekonomi AB was guilty of discrimination in a case that occurred in July 2015, in which the company did not grant a loan to a person in connection with the purchase of building materials online. [FI 13] Having received a rejection of the loan application, the credit applicant, a Finnish-speaking man in his thirties from a sparsely populated rural area of Finland, asked the company to justify the negative decision. The company first responded by saying that their decision required no justification, and then that the decision had been based on a credit rating made by a credit scoring service using statistical methods. Such services do not take the creditworthiness of individual credit applicants into account and, therefore, the assessments made may differ significantly from the profile of the individual credit applicant. This, the credit company agreed, may seem unfair to a credit applicant. The credit applicant petitioned the Non-Discrimination Ombudsman, who then investigated the case for over a year. The credit rejection in question was based on data obtained from the internal records of Svea Ekonomi, information from the credit data file, and the score from a scoring system provided by an external service provider. Since the applicant had no prior payment defaults in the internal records of the credit company, nor in the credit data file, the scoring system gave him a score based on factors such as his place of residence, gender, age and mother tongue. The company did not investigate the applicant’s income or financial situation, and neither was this information required on the credit application. 
As men have more payment failures than women, men are awarded fewer points in the scoring system than women and, similarly, those with Finnish as their first language receive fewer points than Swedish-speaking Finns. Had the applicant been a woman, or Swedish-speaking, he would have met the company's criteria for the loan. After failing to reconcile the case, the Non-Discrimination Ombudsman brought it before the tribunal. The tribunal found that the credit company was guilty of discrimination under the Non-Discrimination Act. The applicant’s age, male gender, Finnish mother tongue and place of residence in a rural area were all factors that contributed to a case of multiple discrimination, resulting in the decision not to grant a loan. Discrimination based on such factors is prohibited in section 8 of the Non-Discrimination Act and in section 8 of the Act on Equality between Women and Men. The tribunal found it remarkable that the applicant would have been granted the loan if he were a woman or spoke Swedish as his mother tongue. The National Non-Discrimination and Equality Tribunal prohibited Svea Ekonomi from continuing its discriminatory practices and imposed a conditional fine of €100,000 to enforce the prohibitive decision.
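The mechanism the tribunal objected to can be sketched in a few lines of code. This is a hypothetical illustration only: the factor weights, threshold, and factor names below are invented for this sketch and are not the actual values used by Svea Ekonomi or its scoring provider. What the sketch shows is how a purely group-statistical model, using no individual payment history, can flip a decision on a single protected attribute.

```python
# Hypothetical, invented weights illustrating group-based credit scoring.
# No individual income or payment history enters the score at all.
SCORE_WEIGHTS = {
    "gender":    {"female": 20, "male": 5},      # group-level payment statistics
    "language":  {"swedish": 20, "finnish": 4},  # mother tongue
    "residence": {"urban": 10, "rural": 3},      # place of residence
    "age_band":  {"30s": 6, "60s": 10},
}
APPROVAL_THRESHOLD = 30  # invented cut-off

def credit_score(applicant: dict) -> int:
    """Sum the group-level points for each of the applicant's attributes."""
    return sum(SCORE_WEIGHTS[factor][value] for factor, value in applicant.items())

# The applicant profile from the case: Finnish-speaking man in his
# thirties from a rural area.
applicant = {"gender": "male", "language": "finnish",
             "residence": "rural", "age_band": "30s"}
counterfactual = dict(applicant, gender="female")  # identical except gender

print(credit_score(applicant) >= APPROVAL_THRESHOLD)       # → False (denied)
print(credit_score(counterfactual) >= APPROVAL_THRESHOLD)  # → True (approved)
```

Under these invented weights, changing gender alone (or mother tongue alone) moves the score across the threshold, which mirrors the tribunal's observation that the applicant would have received the loan had he been a woman or Swedish-speaking.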

ADM in Action

Benefit processes at the Social Insurance Institution of Finland (Kela)

Kela is responsible for settling benefits under national social security programmes. Around forty core benefits are handled by Kela, including health insurance, state pensions, unemployment benefits, disability benefits, child benefits and childcare allowances, student financial aid, housing allowances, and basic social assistance. While benefits are an important income source for underprivileged groups in particular, most Finns regardless of income level or social status receive benefits from Kela during their lifetime. Kela pays out benefits of approximately €15.5 billion annually. [FI 14] While more traditional automated information systems have been used for decision automation in Kela for decades, AI, machine learning and software robotics are seen as an integral part of their future ICT systems. [FI 15] Ongoing and potential AI developments include chatbots [FI 16] [FI 17] for customer service, automated benefit processing, detection (or prevention) of fraud or misunderstanding, and customer data analytics. In the case of benefit processing in general, and automated processes in particular, procedures are required to conform to existing legislation. From an automation perspective, the relevant phases of the benefit process include submitting the application, prerequisite checks, and the making of the decision. In the application phase, the system can provide information and advice on benefits. However, benefit legislation can allow for various combinations of benefits to be applied in a given situation and, because benefits are interlinked, different combinations may produce different total benefits for the citizen. Therefore, a parameter for automation is how the system should prioritise benefits. Deciding which is the ‘best’ combination of benefits means deciding what outcome should be prioritised—the client’s total received benefits, or some other goal (political, economic or otherwise). 
The automation of prerequisite checks entails, first, checking whether the information provided is sufficient for decision-making, valid, and trustworthy; and second, whether other benefits affect, or are affected by, the applied benefit. Once these checks have been completed, the decision can be made. In most cases decisions are based on a regulated set of rules, and machine learning models can be trained using validated data Kela already has from previous decisions. A problem that Kela faces as a public organization is how to communicate the results and the reasoning behind the decision-making process to citizens. In the case of automated decision-making, decisions are essentially probabilistic in nature, and models are based on features of similar historical cases. How can decision trees be translated into text that is understandable to the customer? And how is the probabilistic accuracy of automated decision-making communicated? In addition to accuracy, one relevant parameter for the customer is response time—this can be measured in weeks when humans process applications, but can be practically instantaneous when an automated system is used. The legislative processes that produce the regulations on which decisions are based further complicate the automation of the benefit process. Kela’s decisions are based on more than 200 pieces of separate regulation, owned by six ministries and spanning 30 years. Separate laws may be semantically incompatible, which complicates their translation into code: for example, different regulations contain more than 20 interpretations of ‘income’. In addition, benefit laws change, and automated models need to be updated to behave correctly once a new law comes into force. When new benefits are created, no decision data is available, which means machine learning models need to be trained using, for example, proxy data created specifically for this purpose. 
This points towards new employee roles that automated decision-making creates in an organization like Kela: in addition to data scientists who supervise and train the systems and analyse data, data validators evaluate decisions made by the systems and refine models based on evaluation, and data creators produce new data to teach the models. [FI 18]
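The benefit-prioritisation question described above can be made concrete with a small sketch. The benefit names, amounts, and the interaction rule below are invented for illustration; real Kela rules span hundreds of separate regulations. The point is that the choice of what counts as the 'best' combination enters the system as an explicit parameter, here the `goal` function.

```python
# A toy model of interlinked benefits. All names, amounts (monthly,
# in euros), and the exclusion rule are invented for illustration.
from itertools import combinations

BENEFITS = {"housing": 320.0, "social_assistance": 250.0, "student_aid": 180.0}

# Hypothetical interaction: these two benefits cannot be combined.
EXCLUSIONS = {("student_aid", "social_assistance")}

def is_valid(combo) -> bool:
    """A combination is valid if it contains no excluded pair."""
    return not any(set(pair) <= set(combo) for pair in EXCLUSIONS)

def best_combination(goal=lambda combo: sum(BENEFITS[b] for b in combo)):
    """Pick the 'best' valid combination. The goal function encodes the
    policy choice of what to prioritise; the default maximises the
    client's total received benefits, but another goal could be used."""
    valid = [c for n in range(1, len(BENEFITS) + 1)
             for c in combinations(BENEFITS, n) if is_valid(c)]
    return max(valid, key=goal)

print(best_combination())  # the combination maximising total benefits
```

Swapping in a different `goal` (say, minimising state expenditure while meeting a subsistence floor) would change which combination the automated system selects, which is exactly the political and economic choice the text says must be made before automation.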

Child welfare and psychiatry services – the use of predictive AI analytics

In an experiment undertaken by the City of Espoo, in cooperation with software and service company Tieto, AI was used to analyse anonymised health care and social care data of Espoo’s population and client data of early childhood education. The goal of the experiment, carried out in 2017 and 2018, was to screen service paths from the data by grouping together risk factors that could lead to the need for child welfare services or child and youth psychiatry services. [FI 19] Tieto and Espoo processed data from the years 2002 to 2016, consisting of 520,000 people who had used the services during that time, and 37 million client contacts. The experiment produced preliminary results about factors that could lead to the need for child welfare services or child and youth psychiatry services. For example, the AI system found approximately 280 factors that could anticipate the need for child welfare services. [FI 20] The experiment was the first of its kind in Finland. Before it, data from public services had never been combined and analysed so extensively in Finland using AI technologies. By combining data from several sources, it was possible to review public service customer paths by families, as the city’s data systems are usually individual-oriented. The City of Espoo is planning to continue experimenting with AI. The next step is to consider how to utilise AI to allocate services in a preventive manner and to identify relevant partners to cooperate with towards that aim. The technical possibilities to use an AI-based system to alert health and social care personnel to clients’ cumulative risk factors have also been tested. Yet, Espoo states that ethical guidance regarding the use of an AI system like this is needed. For example, issues of privacy and automated decision-making should be carefully considered before introducing alert systems for child welfare services. 
Among other things, Espoo is now discussing ethical questions related to the use of such alert systems with expert organisations such as the Finnish Center for Artificial Intelligence (FCAI). As a partner in the experiment, Tieto has gained enough knowledge to develop its own commercial AI platform for customers in both the public and private sectors, which raises further ethical questions.

DigitalMinds – Assessing workers’ personality based on automated ­analysis of digital footprints

The Finnish start-up company DigitalMinds [FI 21] is building a ‘third-generation’ assessment technology for employee recruitment. Key clients (currently between 10 and 20 Finnish companies) are large corporations and private companies with high volumes of job applicants. Personality assessment technologies have been used in job recruitment since the 1940s. At first, these came in the form of paper personality tests that were filled in by prospective job candidates to assess their personality traits. Since the 1990s, such tests have been conducted in online environments. With their new service, DigitalMinds aims to eliminate human participation in the process, in order to make the personality assessment process ‘faster’ and ‘more reliable’, according to the company. Since 2017, it has used public interfaces of social media (Twitter and Facebook) and email (Gmail and Microsoft Office 365) to analyse the entire corpus of an individual’s online presence. This results in a personality assessment that a prospective employer can use to assess a prospective employee. Measures that are tracked include how active individuals are online and how they react to posts and emails. Such techniques are sometimes complemented with automated video analysis to assess personality in verbal communication (see, for example, the HireVue software [FI 22]). The results produced are similar to those made by traditional assessment providers, i.e. whether a person is an introvert or extrovert, attentive to details, etc. Companies that use this technology do not store any data, and they are required by Finnish law to ask prospective job candidates for informed consent to gain access to their social media profiles and email accounts in order to perform the personality test. The candidates use a Google or Microsoft login to access the DigitalMinds platform and input their credentials themselves, avoiding direct disclosure of these credentials to the company. 
According to DigitalMinds, there have been no objections to such analysis so far from prospective candidates. A crucial ethical issue that must be considered, however, is the actual degree of choice a candidate has to decline access to personal and former or current work email accounts, as well as to social media profiles—if a candidate is in a vulnerable position and needs a job, they might be reluctant to decline such an assessment, even though the data may be very personal or include professional or trade secrets within emails. In addition, the varied amount of data that is available online about individuals makes comparison between candidates difficult. According to the company, the trustworthiness of the data is not a big issue if there is a sufficient corpus available online. The company also waives responsibility by suggesting that it does not make the actual decisions, but merely automates the assessment process on which decisions are based. An important issue that this case raises is the degree to which individuals’ online information in social media and emails should be considered trustworthy. Treating it as such potentially harms disadvantaged groups who may have reasons to maintain concealed or fake online personalities.

Digital symptom checkers

A strategic project of the current governmental programme for “self-care and digital value services”, ODA, has developed the OmaOlo (“MyFeel” in English) service which includes digital symptom checkers. [FI 23] Such checkers are now available for lower back pain, respiratory infection symptoms and urinary tract infections, and there will be many more in the future. [FI 24] The symptom checkers are based on The Finnish Medical Society Duodecim’s medical database and the evidence-based Current Care Guidelines. In practice, the symptom checker asks simple yes-or-no questions about the patient’s symptoms and then offers advice about whether the condition needs medical attention or if self-care is sufficient. The checkers also include questions about acute symptoms that need immediate care. If the patient answers “yes” to any acute symptoms, the system recommends that they contact emergency services as soon as possible. The OmaOlo service has a disclaimer emphasising that the aim of the symptom checker is not to produce a diagnosis but to make an evaluation of care needs. This means that it is possible that the assessments made by the symptom checkers are not always accurate, or that they are interpreted as more precise than they actually are. The questionnaires and the algorithms used to analyse them are based on research, when such research is available, or on collective medical care experience. The symptom checkers have been tested in several areas of Finland. In testing, OmaOlo tried to collect as much feedback on the services as possible for further development and validation. Preliminary observations suggest that, compared to the assessments made by health care personnel, more patients than before are referred to medical care by symptom checkers. The checkers, and other OmaOlo services, will be introduced more generally to the public in 2019. 
A new state-owned company, SoteDigi, aims to produce and develop digital health and social care services and will be in charge of the development and the distribution of the OmaOlo services.
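The triage flow described above — acute questions that override everything else, followed by an evaluation of care needs rather than a diagnosis — can be sketched as follows. The questions and the two-indicator threshold are invented for illustration and are not taken from the Duodecim database or the Current Care Guidelines.

```python
# A minimal sketch of yes/no symptom-checker triage. All questions and
# thresholds are invented; real checkers encode clinical guidelines.

ACUTE_QUESTIONS = [
    "Do you have severe chest pain?",
    "Are you unable to pass urine at all?",
]
CARE_QUESTIONS = [
    "Have the symptoms lasted more than three days?",
    "Do you have a fever above 38 C?",
]

def triage(answers: dict) -> str:
    """Map yes/no answers to a care-needs evaluation (not a diagnosis)."""
    # Any 'yes' to an acute question overrides all other logic.
    if any(answers.get(q, False) for q in ACUTE_QUESTIONS):
        return "Contact emergency services as soon as possible."
    # Otherwise count care-need indicators against an invented threshold.
    if sum(answers.get(q, False) for q in CARE_QUESTIONS) >= 2:
        return "Contact your health centre for a care assessment."
    return "Self-care is likely sufficient; re-check if symptoms worsen."

print(triage({"Do you have severe chest pain?": True}))
```

Even this toy version shows why calibration matters: where the care-need threshold is set directly determines how many users are referred onward, which is the behaviour the preliminary OmaOlo observations flag.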

Utopia Analytics – Automated Content Moderation on Social Media and e-commerce websites

Utopia Analytics [FI 25] is a Finnish startup that specialises in automating content moderation on social media, public discussion forums, and e-commerce websites. The company brands itself as working with the aim to “bring democracy back into social media” by keeping “your audiences talking, your online communities safe and your brand authentic”. The service that they offer is a self-learning AI (neural network) which analyses the corpus of data on a given website, and then starts moderating content. At first, moderation is done alongside human moderators who test and ‘train’ the decisions of the system. After some time, human moderators take the role of supervisors of the algorithm, and only control its decisions in controversial or less straightforward cases. The service has been used on a Swiss peer-to-peer sales service site, where it moderates sales advertisements for inappropriate or inaccurate content in several languages, as well as on the largest Finnish online forum, Suomi24. Regarding moderation on Suomi24, the company reports a quantitative increase in the number of moderated posts—from 8,000 to 28,000—and a related increase in advertisements on the website. It is not clear what kind of content is being moderated in each case and how. It seems that such decisions are contextual and can be set up within the framework of each platform that implements the algorithm.
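The train-then-supervise workflow described above can be sketched as a routing rule: confident model decisions are applied automatically, while borderline cases are escalated to the human moderators who supervise the system. The toy keyword classifier and the confidence threshold below are illustrative assumptions standing in for the company's neural network.

```python
# A minimal sketch of human-supervised automated moderation. The
# classifier and threshold are invented stand-ins, not Utopia's model.

def model_confidence(post: str) -> float:
    """Stand-in for a trained model's confidence that a post is
    acceptable. A real system would use a neural network trained on
    the platform's own moderation history."""
    banned = {"spam", "scam"}
    return 0.1 if any(word in post.lower() for word in banned) else 0.9

def moderate(post: str, threshold: float = 0.8):
    """Route a post: act automatically when confident, else escalate."""
    conf = model_confidence(post)
    if conf >= threshold:
        return ("publish", "automatic")
    if conf <= 1 - threshold:
        return ("reject", "automatic")
    # Borderline cases go to the human supervisors of the algorithm.
    return ("review", "human")

print(moderate("Great bike for sale"))  # → ('publish', 'automatic')
print(moderate("Buy this scam now"))    # → ('reject', 'automatic')
```

Raising the threshold (e.g. `moderate(post, 0.95)`) routes more cases to human review; each platform can tune this trade-off, which matches the observation that moderation decisions are contextual and configured per platform.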

Minna Ruckenstein

Minna Ruckenstein works as an associate professor at the Consumer Society Research Centre and the Helsinki Center for Digital Humanities, University of Helsinki. Ruckenstein has studied human-technology involvements from various perspectives, exploring how people use direct-to-consumer genetic testing, engage with self-tracking technologies and understand algorithmic systems as part of daily life. The disciplinary underpinnings of her work range from anthropology and science and technology studies to consumer economics. She has published widely in top-quality international journals, including Big Data & Society and Information, Communication and Society. Prior to her academic work, Ruckenstein worked as a journalist and an independent consultant, and this professional experience has shaped the way she works: in a participatory mode, in interdisciplinary groups and with the stakeholders involved. Her most recent collaborative projects explore social imaginaries of data activism.

Julia Velkova

Julia Velkova is a digital media scholar and post-doctoral researcher at the Consumer Society Research Centre at the University of Helsinki. Her research lies at the intersection of infrastructure studies, science and technology studies and cultural studies of new and digital media. She currently works on a project which explores the waste economies behind the production of ‘the cloud’, with a focus on the residual heat, temporalities and spaces that are created in the process of data centres being established in the Nordic countries. Other themes that she currently works on include algorithmic and metric cultures. She is also the Vice-Chair of the section “Media Industries and Cultural Production” of the European Communication Research and Education Association (ECREA). Her work has been published in journals such as New Media & Society, Big Data & Society, and International Journal of Cultural Studies, among others.