The business ethics of artificial intelligence
Companies need to know the impact that new technologies such as AI might have on their business, says Guendalina Dondé
Steph Fairbairn, Construction Journal Editor, RICS
17 April 2019
Artificial intelligence (AI) was the focus of the latest in the series of World Built Environment Forum webinars. Current and future use of AI in the built environment and skills for professionals were among the topics tackled by the RICS-assembled panel of experts.
From grammar checkers to satellite navigation systems, we use AI in the modern world without necessarily realising it. However, while AI is widespread at a general level, the built environment is only just beginning to take advantage of its uses.
The Gartner Hype Cycle represents the maturity, adoption and social application of specific technologies. At present, AI in the built environment is at the second stage of the cycle, sitting at the peak of inflated expectations. This is the point where there is a lot of publicity, often accompanied by hopes, scepticism and companies making judgements about whether or not to take action.
Some have taken action, and as a result AI is being used in pockets of the built environment. For example, it can be used in geotechnical engineering to analyse soil properties, and throwaway sensors and algorithms can help predict concrete curing times more accurately. AI is also being used to create smarter infrastructure solutions, and for marketing, sales and customer service, where it can turn costs into revenue generators.
There is, however, scope for so much more. At the height of its potential, AI can help give us insight and analysis, helping us to optimise, monitor and predict the built environment to the benefit of its inhabitants.
Finding a balance between optimism and fear is key. The optimism that exists around AI is valid but must be tempered. We must be pragmatic in order to avoid risks such as over-trusting results and becoming over-reliant on AI. This will require due diligence, human monitoring and intervention, which is why the fear that AI will make humans redundant is misplaced. AI is unlikely to displace jobs; rather, it will displace low-level tasks, making jobs more interesting and moving human contribution higher up the value chain. The audience seemed to echo this view: asked in a poll whether they thought artificial intelligence would put their jobs at risk, the most popular response was 'disagree', with 37% of respondents voting for this and a further 7% voting 'strongly disagree'.
Ultimately, it's not about fearing AI, it's about ensuring that we are in control of it. The policy, legal aspects and frameworks must be taken seriously in order to ensure the use of AI is properly regulated and therefore we gain the most benefits from it. By working to remove our fear of AI, we can address the fact that we are often slow to catch up and truly take advantage of the technology at our disposal.
Experts at this year's MIPIM took the same view, noting that when we do take advantage of AI, change will occur quickly. This suggests that technology in the profession has huge potential but is not yet being used effectively. That will change soon: when adoption of a technology accelerates, it can happen almost overnight. You just have to look at SMS versus WhatsApp.
A second audience poll asked whether respondents thought artificial intelligence would fundamentally change the skills they would need in the future, with a majority of 48% agreeing with this statement, and 22% strongly agreeing.
In order to ensure they are best placed to adapt to the changing profession, professionals need to understand the ways in which their job will be impacted and get ahead of the curve. This requires a journey of life-long learning. It's not about trying to outsmart AI but learning how you can work alongside it.
In the interim period between the Hype Cycle's 'peak of inflated expectations' and its 'plateau of productivity', understanding the basics is key. AI will be used most effectively if we get those basics right: understanding what data is available and what data is really needed for AI systems.
As we prepare for the future, the work you do with AI, and understanding you need of it will largely depend on your role within the built environment. You need to ask yourself what you currently do, how your tasks will change, and what tools will be open to you in the future.
For example, many professionals today work with Excel using only a small fraction of its total functions, hiring experts or undertaking specific training when necessary. It will be the same for AI: not every professional will need to understand the ins and outs of the functions of AI packages. Instead, professionals should understand AI's capabilities and how they can be applied, and then rely on those with experience to train the software, adapting it to the needs of their particular company, before teaching colleagues what they need to know.
And it's not just about those currently working. According to the World Economic Forum's 'Future of Jobs' report, 65% of children entering primary school today will ultimately end up working in completely new job types that don't yet exist. So, while AI may remove some jobs, it will undoubtedly create a lot more. It's not about loss, it's about change.
As artificial intelligence becomes increasingly capable of mirroring human cognitive ability, what impact will this have on professions? Will it lead to job losses or require new skill sets and could the use of AI for routine cognitive tasks restrict training opportunities for newcomers?
Vital to understanding this change is appreciating the capabilities and limits of AI, and how they compare with those of humans. Unless data is assigned a numerical value, AI cannot understand its quality. While machine learning needs thousands to millions of data points to learn, a human can learn from one example. And while AI is very good at working with data, humans are good at spotting what data is missing.
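To make the first of these limits concrete: qualitative information has to be converted into numbers before a machine-learning system can work with it. The snippet below is an illustrative sketch, not something from the webinar; it shows one common approach, one-hot encoding, applied to a hypothetical list of soil types of the kind a geotechnical dataset might contain.

```python
def one_hot_encode(values):
    """Map each category to a vector of 0s with a single 1.

    Machine-learning models operate on numbers, so a qualitative
    property such as soil type must first be encoded numerically.
    """
    categories = sorted(set(values))
    index = {cat: i for i, cat in enumerate(categories)}
    vectors = []
    for value in values:
        vec = [0] * len(categories)
        vec[index[value]] = 1
        vectors.append(vec)
    return categories, vectors


# Hypothetical soil samples, purely for illustration.
soil_samples = ["clay", "sand", "clay", "silt"]
categories, encoded = one_hot_encode(soil_samples)
print(categories)   # ['clay', 'sand', 'silt']
print(encoded[0])   # [1, 0, 0] -- the first sample, 'clay'
```

A human looking at the word "clay" brings context and judgement to it; the model only ever sees the vector.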
It's important to remember that the list of things AI cannot do as well as humans is much longer than the list of things it can. It includes a number of soft skills, such as higher-order abstract reasoning, emotional inference, contextual inference, critical thinking and people management.
Transparency and codes of conduct are also key concerns around the use of AI, for example ensuring the programming of the AI is free from bias. Membership and trade bodies, businesses and academics must play their part in ensuring that the ethics of the profession are not compromised due to the use of AI.
At the peak of inflated expectations, there is still much to be ascertained. As we move towards truly harnessing the power of AI, in the built environment but also more generally, we must focus on the benefits and remain mindful of the risks. Webinar panellist Alan Mosca, CTO and co-founder of nPlan, said: "The bigger benefit of AI is enabling humans to do things they couldn't do before."
Combining this with other technologies such as the internet of things could mean that the whole world as we know it is positively disrupted. The power is in our hands.