Background  

The professional standard should be read in light of its aim to secure the public advantage of surveying and ensure the future security of the profession. It places, at the core of its requirements, the skill and experience of the professional surveyor, alongside the need to guard against complacency about how this technology is involved in decision-making when providing professional services. 

Scope 

The standard applies only to the use of AI that will have a material impact on the delivery of surveying services. Whether an AI system will have a material impact will depend on the facts and circumstances of its use. A record must be made of the AI systems considered to have the potential to materially impact the delivery of services, and of the reasons why. 

A written record must be made of any conflicts between the standard and applicable legislation. 

Baseline knowledge  

Develop and maintain sufficient and appropriate baseline knowledge of: 

  • the different types and sub-sets of AI system and their basic ways of working, limitations and failure modes 
  • the risk of AI systems producing erroneous output  
  • the inherent risk of bias in AI systems and 
  • data usage and data risks relevant to the use of AI systems. 

Applies to RICS members 

Governance 

Some AI systems require data to be uploaded to enable functionality; firms must therefore make sure private and confidential data is safeguarded, in ways that include those set out in the standard. 

Assess, in writing, whether AI is the most appropriate tool for use. This could be in the form of a periodically reviewed policy or standing statement, but must be developed after consideration of the matters set out in the standard.  

Maintain a written register of certain information set out in the standard in order to support the appropriate tool assessment and the management of inherent and consequent risks in the use of AI.   

Develop and implement policies – whether standalone or additional to existing IT usage policies – regarding procurement and use of AI systems that, at least,  

  • detail and clarify the roles, responsibilities and liabilities of all employees in the regulated firm involved in the procurement and/or use of AI 
  • detail the regular (at least annual), relevant training expectations for staff in the regulated firm involved in the procurement and/or use of AI 
  • state how human control and judgement are to interact with the use of AI systems, such as through regular monitoring or dip samples of outputs for quality assurance purposes, and  
  • provide guidance to staff on how to identify and mitigate risks involved in the use of AI systems. 

Applies to RICS-regulated firms 

Risk management 

Actively document risks in a risk register that: 

  • documents overarching risks associated with the use of AI, including: 
    • inherent bias in AI systems and their outputs  
    • erroneous outputs from AI systems 
    • limitations to the quantity and quality of information available regarding the AI system and its underlying training data, and 
    • retention and/or use of data inputted by the firm into the AI system, and 
  • for each risk includes: 
    • a description of the risk 
    • the likelihood of the risk materialising and its likely impact 
    • the plan to mitigate and manage the risk 
    • the risk appetite of the firm 
    • regular updates to the status and progress of risk management, and 
    • a categorisation of the risk according to a red, amber, green (RAG) rating, or similar method. 
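As an illustrative sketch only (the standard does not prescribe any format or tooling), a risk-register entry covering the fields above could be modelled like this; all names and example values are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum

class RAGRating(Enum):
    """Red/amber/green categorisation, per the standard's suggested method."""
    RED = "red"
    AMBER = "amber"
    GREEN = "green"

@dataclass
class AIRiskEntry:
    """One hypothetical entry in a firm's AI risk register."""
    description: str        # a description of the risk
    likelihood: str         # likelihood of the risk materialising
    impact: str             # its likely impact
    mitigation_plan: str    # the plan to mitigate and manage the risk
    risk_appetite: str      # the firm's appetite for this risk
    rag: RAGRating          # RAG categorisation
    status_updates: list = field(default_factory=list)  # regular updates

# Illustrative example entry (invented values)
entry = AIRiskEntry(
    description="Inherent bias in an AI valuation tool's training data",
    likelihood="medium",
    impact="material",
    mitigation_plan="Human review of outputs; periodic bias checks",
    risk_appetite="low",
    rag=RAGRating.AMBER,
)
entry.status_updates.append("Review completed; no further action required")
```

A structured record like this also makes the "regular updates" requirement auditable, since each status change is appended rather than overwritten.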

Applies to RICS-regulated firms 

 

Procurement and due diligence 

Carry out detailed due diligence before procuring an AI system from a third party, in order to obtain information that enables clearly justified decisions regarding procurement. Such detailed due diligence must involve the steps and the minimum information requests set out in the standard.  

Where an AI system supplier provides no or only limited information, the risks associated with any missing information must be identified and recorded in the risk register. 

Applies to RICS-regulated firms 

 

Outputs, reliance and assurance 

Apply professional judgement – comprising knowledge, skills, reasoning, experience and professional scepticism (as defined) – to make a written decision about the reliability of an AI system output that will have a material impact on the delivery of surveying services. That written decision must detail the matters set out in the standard. 

High-volume outputs are to be assessed for reliability via randomised dip samples. 
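One hypothetical way to draw a randomised dip sample from a batch of high-volume outputs; the sample size and seed are assumptions, as the standard does not specify them:

```python
import random

def dip_sample(outputs, sample_size=10, seed=None):
    """Return a random subset of outputs for manual reliability review.

    Samples without replacement; if the batch is smaller than the
    requested size, the whole batch is reviewed.
    """
    rng = random.Random(seed)  # seedable for a reproducible audit trail
    k = min(sample_size, len(outputs))
    return rng.sample(outputs, k)

# Illustrative use: review 25 of 500 AI-generated outputs
batch = [f"output-{i}" for i in range(500)]
to_review = dip_sample(batch, sample_size=25, seed=42)
```

Recording the seed alongside the sample would let the firm demonstrate later exactly which outputs were selected for assurance.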

Applies to both RICS members and RICS-regulated firms 

Client communication and explainability 

In order to maintain the trust and confidence of clients, documentation governing the client relationship must include the information required by the standard in advance of using AI systems to deliver surveying services. 

Additionally, if requested by clients, certain information set out in the standard (concerning the AI system and its risks and reliability of output, and the due diligence carried out) must be provided. 

Parts apply to RICS members and/or RICS-regulated firms