Artificial intelligence (AI) is revolutionising the natural and built environment sectors, enabling gains in efficiency and sustainability across planning, design, construction and maintenance.

By integrating AI into their practice, members and RICS-regulated firms can make data-driven decisions. However, the responsible use of AI is important for the benefit of all stakeholders, including the public and the environment. As AI deepens its transformation of surveying practice, establishing clear, reliable standards is more critical than ever.

Responsible use of AI – a new RICS professional standard

AI is increasingly involved – both knowingly and unknowingly – in the production of professional work in surveying. Since AI can substitute for human decision-making, it is important to have safeguards in place to ensure members and RICS-regulated firms maintain control of their professional work. This ensures quality for clients and upholds high standards in the public interest.

The standard aims to:

  • assist RICS members and regulated firms in establishing guardrails to maintain professional judgement while adapting to the use of this new technology
  • build confidence in data management
  • ensure effective communication with clients.

By doing so, the standard seeks to support trust among stakeholders and ensure AI enhances safety, resilience, and innovation in the natural and built environment.

Key topics

Designed to be incorporated into existing work processes and be globally applicable, the standard addresses:

  • knowledge requirements for using AI in surveying
  • practice management, including data governance, system governance, and risk management
  • AI procurement and due diligence processes
  • output reliability and assurance protocols
  • client communication and transparency requirements
  • AI system development guidance for firms creating their own AI solutions.

Expert Working Group biographies

Sophia Adams-Bhatti
Co-chair

Sophia Adams-Bhatti is a public policy and regulation expert with over 25 years’ experience across a number of sectors, including the law, financial services, competition, consumer affairs, healthcare, immigration and asylum. Most recently she was the Global Head of Purpose and Impact at Simmons & Simmons LLP. Previously, she was the Director of Policy at the Law Society of England and Wales, where she led, amongst other things, the Commission on the use of AI in the Criminal Justice sector. Sophia is a non-executive Director at Amnesty International UK and a Board member of Lawtech UK. She is an ethics expert who regularly advises and speaks on associated issues.

Christopher De Gruben
Co-chair

Director - Head of Property, Artefact

Christopher de Gruben is an RICS Chartered Valuer with over 20 years of global urban sector experience. His particular passion is the emergent use of data and AI in the property sector. Chris started his career working for an international property developer across Asia, focused on its Mongolian operations. His primary tasks revolved around designing, developing and selling high-end real estate to foreign investors and corporates. This interest in emerging markets eventually led Chris to co-found his own firm dedicated to research on the property investment landscape in Mongolia and the region. The firm quickly grew to become a primary investor in the city centre of Ulaanbaatar, while also contributing to and leading urban development projects for the World Bank, EBRD and Asian Development Bank. During this work, Chris became fascinated with the potential uses of data to improve the operations and understanding of our urban environments. Today, Chris is the UK Head of Property for Artefact, a global data-focused management consultancy that combines advanced data science with transformation-led management consulting. Chris and his team build bespoke AI models for some of the UK’s FTSE 250 property companies. Models he has built can accurately predict market movements, define the optimal use of a residential site, carry out dynamic pricing for lease renewals for BTR operators and suggest an ideal mix of tenant types in retail centres. Above all, Chris maintains a strong belief that accurate and timely property valuations are the bedrock of our financial systems and contribute to a stable economy where investment decisions are made on a sound basis. To do this, Chris believes that we must make better use of the wide variety of data tools and datasets available to valuers to paint a more accurate and contextual picture of the market and highlight significant uncertainties.
Chris holds an MSc in Sustainable Urban Development from Oxford University, an MSc in Real Estate from the University of Reading, an EMBA from INSEAD business school and a PGDip in Global Business from Saïd Business School. He also continues to develop his urban planning skills as an AssocRTPI. Within RICS, Chris has sat on the North Asia Valuation Board, is an APC assessor and has supported a number of working groups. Today Chris is a PropTech member and a member of the Valuation Professional Group Panel, and continues to promote the use of data and AI within the valuation profession.

James Garner FRICS

Senior Director at Gleeds | Global Head of Data, Insights & Analytics | Data IQ 100 2023 | BSc (Hons), FRICS, RITTECH, AMBCS

Chair of the Project Data Analytics Taskforce (https://www.projectdataanalytics.co.uk/)

James, an accomplished data leader in the construction industry, began his journey as a Quantity Surveyor in 2000. He earned a first-class honours degree, writing his university dissertation on data and digitalisation in the construction sector. James quickly built a strong portfolio, becoming a member of the Royal Institution of Chartered Surveyors (RICS) in 2002. In 2012, he was elected a Fellow of RICS for his significant writing contributions to the RICS Black Book technical standards. Throughout his career, James has worked on prestigious projects, including buildings for Imperial College and various Oxford University colleges. While employed at Gleeds as a Quantity Surveyor, he became head of the Education sector for London. In 2020, James took on a new challenge as Head of Insights and Analytics at Gleeds. This opportunity allowed him to further his skillset and qualify as a data analyst in 2022, launching the company's data analytics and research capabilities. Today, James leads Gleeds' Global Data, Insights, and Analytics department. With a focus on business intelligence, project intelligence, data literacy, and data maturity, he utilises his expertise in both Quantity Surveying and data analytics to drive the industry's digital transformation. As a Fellow of RICS, a qualified data analyst, and an influential data leader, James recognises the crucial role data and analytics play in informing strategies and decision-making processes within the profession. Recently, James was elected chair of the Project Data Analytics Taskforce, further solidifying his position as a leading figure in the construction industry's digital transformation.
James was nominated as one of the Top 100 People in Data 2023 by Data IQ, showcasing his dedication and impact in the field of project data analytics. James also publishes a weekly newsletter and podcast called ProjectFlux to help people keep up to date: https://linktr.ee/projectflux

Matthew Lavy KC

Barrister, 4 Pump Court

Matthew is a commercial barrister with a particular focus on technology and telecoms-related disputes. Matthew acts and advises in relation to the full range of traditional technology disputes, such as project failures and delays, IP licensing, copyright infringement, breach of confidence, outsourcing agreements, data protection, digital media, e/m-commerce and digital rights management; he also has extensive expertise in disputes arising out of cyber liability and data, blockchain technologies and AI. Matthew was one of the authors of the Legal Statement on Cryptoassets and Smart Contracts issued by the Jurisdiction Taskforce of the UK’s Lawtech Delivery Panel. He is the author of two books on systems administration and is co-editor of, and contributor to, The Law of Artificial Intelligence (Sweet & Maxwell, 2020). He is on the Nominet DRS panel of experts and a Trustee of the Society for Computers and Law. Prior to coming to the Bar, Matthew worked as a software developer, systems administrator and technical writer.

Poppy Martin MRICS

Associate Director of Rural, Energy and Projects, Savills

Poppy is a chartered surveyor based in the south west, working across the south of England and south Wales and specialising in rural estate management. Poppy studied at the Royal Agricultural College before qualifying as a surveyor with Savills. She has a broad range of experience in estate management work, including property management, leasing and letting, landlord and tenant, sales of land and property, and providing strategic advice to clients. Poppy is a former winner of the RICS Young Surveyor Award – Land (Urban & Rural) category, a former committee member of Bristol Matrics and a judge of the RICS Surveyor of the Year Awards. She looks forward to her involvement with the Land and Natural Resources PGP and is particularly interested in developing the next generation of surveyors and in how technology can be implemented in our roles.

Nella Pang

Managing Director, Omega RE

Nella Pang, the Managing Director of Omega RE, is a trailblazer in the property industry, continually pushing the boundaries of real estate through innovation and technology. Since founding Omega RE in 2020, Nella has been pivotal in achieving significant milestones, including the firm becoming the first B Corp certified commercial real estate advisory firm outside of London in 2023. This certification highlights Omega RE's dedication to social and environmental responsibility. Nella's innovative spirit is evident in her approach to integrating PropTech into clients' real estate portfolios, enabling them to leverage key performance indicators for measurable success. Her client-centric philosophy has led to long-term partnerships and remarkable growth, such as advising a core client in expanding from 5 to 34 sites across the UK and increasing their business valuation sixfold from 2022 to 2024. Under her guidance, Omega RE fosters a culture of inclusivity and forward thinking, demonstrated by initiatives like the Strong Female Lead Initiative and a commitment to gender diversity in recruitment. Nella also hosts the Let’s Talk South Coast Podcast, promoting industry discourse and attracting investment. Nella's innovative projects, such as the UK's largest clean air mural in Southampton, exemplify her commitment to environmental consciousness and unconventional solutions. Her influence extends through leadership roles on esteemed boards, including the Southampton Chamber of Commerce and the RICS Commercial Property Panel, where she contributes to shaping industry practices globally. Recognised with accolades including Best Newcomer of the Year and Leader of the Year, Nella remains a visionary leader dedicated to driving positive change. Her ongoing efforts to mentor the next generation and support disadvantaged children in STEM underscore her commitment to education and inclusivity.
Nella Pang stands as a pioneering force, continually staying ahead of the curve in real estate, leveraging technology, and fostering innovation to benefit her clients and the industry at large.

Craig Ross MRICS

Associate Director, Safety and Security at Diriyah Company, Saudi Arabia

Craig Ross is a Scottish Chartered Building Surveyor with over 27 years of experience. He began his career in 1996, training in and studying Architectural Technology at UHI Perth. He later earned a first-class honours degree in Building Surveying at Napier in Edinburgh, focusing on building surveys, new-build design and heritage projects. He subsequently served in the UK Armed Forces, where he integrated building surveying with security and counterterrorism, earning a Master's degree with distinction at the University of St Andrews. Currently pursuing a PhD at CSTPV in St Andrews, Craig researches terrorism threats to heritage and implements security designs in the Middle East. Also a Chartered Building Engineer and a member of engineering councils in the UAE and KSA, Craig is Associate Director, Safety and Security at Diriyah Company. Engaging in various RICS initiatives, he has served as an APC Chair/Assessor and local Matrics Chair in Scotland and the UAE, and worked in a staff role as Associate Director for Built Environment for two years. His work at RICS involved collaborating with the UK Government, developing guidance notes and contributing to fire safety programmes. Understanding the threats and opportunities of AI, and using AI to enhance the profession, is an area of particular interest for Craig, and developing the RICS AI standard is a significant step in this process.

Malcolm Webb

Risk Director, Legal & General Surveying Services

Malcolm started work at 18 as a trainee Building Surveyor and has spent all his working life in surveying. During that time, he has had great opportunities and experience by being flexible and exploring available options. He trained as a Building Surveyor by “day release” and achieved MRICS in 1994. He has been lucky to work in different fields, including Project Management, Social Housing and the Valuation and Survey industry. Malcolm’s main focus since leaving project management has been residential property. He has experience of working in small practices, large corporates and Social Housing, as well as with Lenders. This gives him a varied pool of knowledge and experience to draw upon. As Malcolm became more experienced, he liked to provide support to colleagues and mentor people in surveying. He has previously contributed to isurv and last year achieved the Mentoring Top Talent award at The Institute of Leadership Management. His focus in recent years, since joining Legal & General, has been risk management for lenders and surveyors in areas including Quality, Property Risk, Lender Guidance, Professional Standards and PII. He has also previously contributed to industry groups including the CML (UKF), BSA, RSVG and HBF, and to other RICS working groups on Flooding and the Home Survey Standards.

AI system use in surveying

Uses of AI systems are many and varied, with some being used for everyday, mundane, administrative tasks, and others being used for significant analytical tasks or content creation. Within surveying, AI may be used in a way which is incidental or unrelated to the actual delivery of surveying services – e.g. for back-office administration tasks such as managing room bookings. However, not all back-office, business organisation uses of AI will be incidental – some may have a material impact on the delivery of surveying services, such as the use of AI-powered chatbots to provide certain levels of customer service, or the use of co-pilots to write emails to clients.

Deciding whether or not the use of an AI system will have a material impact on the delivery of surveying services will often be a question of judgement, with grey areas becoming clearer over time as use of these systems develops. Members and regulated firms should apply their informed professional judgement to the question of material impact in any given instance. Ultimately, it is for the Regulatory Tribunal to determine whether any given use of AI had a material impact on service delivery, and is therefore subject to the requirements of the standard.

Across the built and natural environment, there are many emerging AI technologies that may have a material impact on the way professionals conduct their work, deliver services, and uphold industry standards. Below are case studies of particular uses of AI systems, within practice sectors and more generally, that are likely to have a material impact on the delivery of surveying services.

Common features of AI solutions

Note that, although AI systems are often used in a bespoke way – and will continue to be developed to offer personalised solutions – there are, at present, certain features of AI that are common to all uses. One of these features is that most tools in this space are third-party proprietary solutions rather than in-house builds. This means that many AI-enabled tools provide limited detail about which specific functions of the solution actually use AI — vendors often refer to systems as “AI-powered” without breaking down how or where AI is applied. In most cases, AI is embedded within a broader tech system and forms only a small part of the process – for example, a BIM tool might include an AI feature for automated clash detection, but the rest of the system is conventional software. 

The upshot is a further common feature: third-party tools tend to offer limited technical detail, with information generally restricted to marketing material, use cases and high-level documentation. This makes it difficult to understand how the AI system operates or produces outputs – a lack of transparency that makes it hard to fully trust outputs or explain them to clients.

Another common feature that arises out of this situation is that alternatives to AI are difficult to define, as AI often supports just one function within a larger system. That said, technologies like drones, IoT, and BIM can deliver similar value — such as automation or data-driven insight — either independently or in combination with AI. It is worth bearing in mind that AI’s value is often overstated when the same outcomes might be achieved through alternative, simpler methods.

A further important common feature is that the procurement process for AI is broadly the same as for any other technology solution: identifying the business need, trialling the tool and assessing its value—but AI systems typically require more data, particularly when the model needs to be trained or adapted to make decisions based on your organisation’s specific context.

 

Frequently Asked Questions

The standard will come into effect on 9 March 2026.

AI is an evolving landscape; the technology and the terminology are therefore subject to ongoing changes that will have to be reflected in the standard through regular review. The standard sets requirements without technical detail, with practical implementation by members and firms supported through RICS training and supporting materials, which can be updated more regularly.

This sort of use is unlikely to have a material impact on the delivery of surveying services, although it is not impossible and members should use their professional judgement. The professional standard applies only to use of AI systems that have a material impact on the delivery of surveying services because use of AI in that context is generally high-risk and the standard is aimed at high-risk use.

Only tools incorporating AI that will have a material impact on the delivery of surveying services need, under this standard, to be assessed for appropriateness. 

Requirements to keep a record do not mean printing off everything and keeping lever arch files of paper – it is a matter of judgement of the member or firm as to how a record is kept and in what form. The record needs to be sufficient to demonstrate that the member or firm has met the requirements of the standard.

RICS is not asking members or firms to be experts in computer science or artificial intelligence. However, as AI becomes more regularly used within surveying and embedded in the tools that surveyors use, all members need to have an awareness of some of the basics of AI and the risks of its use in surveying.

This only applies to outputs which have a material impact (as above).

The principal aim of this requirement is the creation of a culture of informed governance, underscoring a professional responsibility and encouraging effective oversight of AI use. An appropriately qualified surveyor has the knowledge, skills and experience to make properly informed decisions about output reliability, thereby more effectively mitigating the risks carried by the use of AI system outputs. The need for the surveyor to be named is connected to accountability and also explainability – having a named individual who can explain the written decision if necessary.

It is also worth noting that, particularly when considering high volume outputs, regulated firms are still accountable for every output, even if there isn’t scrutiny of each one.

Terms of Engagement are used to manage the relationship with your client – managing expectations and explaining what services are going to be delivered, when, how and in exchange for what fee. Your use of AI in the delivery of surveying services is an important consideration, and transparency with your clients about its use is sensible so that the client properly understands what services are being delivered and how. Transparency up front in Terms of Engagement can often avoid unnecessary misunderstandings or even disputes further down the line.

This standard seeks to set reasonable requirements regarding the information to be included in Terms of Engagement – avoiding the need for lengthy explanations or detailed disclosures about the use of AI, but ensuring transparency with clients. This standard also does not operate in a vacuum; other requirements and guidance exist globally which also require transparency and explainability. Useful guidance in the UK comes from the ICO and The Alan Turing Institute, including ‘Explaining decisions made with AI’ (ICO) and ‘How to use AI and personal data’.

It is acknowledged that information about the interaction between AI systems and the environment is sparse and complicated. It is also acknowledged that AI providers will vary in their willingness and ability to provide such information. Nevertheless, use of AI systems does not come without environmental impacts, so it will be wise for businesses to seek to take environmental considerations into account, particularly those who are active in minimising their emissions impacts.

The standard requires firms to seek certain information from AI providers before procuring or using an AI system. The information to be sought is relevant to decision-making about the appropriateness of the AI system for the particular task intended and about the likely risks of any output from that AI system. The standard, however, acknowledges that AI providers may be unwilling or unable to provide all of the information sought. In such circumstances firms are required to note the risks arising from not having the information. Firms may choose to use publicly available information about the AI system to help in their risk identification and decision-making. They may also, in some cases, wish to seek legal and/or data protection expert advice.

Using AI can increase cybersecurity risks. However, AI can also be used to help defend against such risks.

AI can generally increase cybersecurity risks in two ways:

  • First, AI is very good at hacking – it is good at pattern recognition and mimicry, which are both important tools for hacking technology systems including payment systems.
  • Secondly, use of AI can, in some circumstances, create weaknesses in systems which can be exploited by hackers to gain unauthorised access and/or manipulate data.

Members and firms should be aware of these risks, as well as the risks of fake AI tools such as those mimicking common tools such as ChatGPT.

Sectors such as real estate appear to be emerging as key targets for such cybersecurity attacks, likely due to the sensitive documentation, information and data held by real estate firms, e.g. financial information and identity documents. Further information on data handling in a surveying context can be found on RICS’ Data handling page.

IP risks are commercial risks and therefore not covered by the standard. Firms using AI should be aware of the associated IP risks, particularly when they are uploading data and information to AI systems.

AGI refers to Artificial General Intelligence. There are various definitions of this, currently with little consensus. Generally, AGI describes AI that is as intelligent as, or more intelligent than, humans. While use of such AI in surveying seems unlikely, given the importance of professional skill and judgement, it is captured by the definition of ‘artificial intelligence’, so use of the technology that has a material impact on surveying services falls within the standard.

‘Limitations’ of AI systems include, e.g., lack of common sense, lack of contextual understanding, reliance on the existence and reliability of large datasets, limited originality, ethical decision-making challenges, vulnerability to attacks, lack of transparency and explainability.

‘Failure modes’ of AI systems include, e.g.:

  • Confidentiality: improper access controls and unauthorised data access
  • Data: altered or corrupted data, loss of data provenance
  • Fairness: system output violates individual rights e.g. through inaccurate results or unfounded predictions
  • Bias amplification
  • Misinterpretation of instructions
  • Human-in-the-loop bypass

‘Hallucinations’ – where AI generates false or invented output – can be viewed as both a limitation and a failure mode, depending on the nature and extent of the hallucination.

The standard requires firms developing AI to record certain information in writing after developing and before general deployment of the AI system. This means that these requirements do not apply to beta testing, trials or sandboxing, though where such testing and trials involve clients, the clients should be fully apprised of the risks.

This is something firms will want to check within their own policies. Insofar as AI is used to support the delivery of professional services which are covered by PI, AI use is likely to be covered by PI unless it is specifically excluded. Firms should carefully check their PI policies to ensure that no such exclusions have been included without their knowledge.

If your firm is developing and selling AI tools, you may wish to consider obtaining product liability insurance, and/or limiting your liability in relation to those tools.