Technology has proved a disruptive force in almost every industry, from listening to music to hailing a taxi. For the most part, this has brought a myriad of benefits, including a greater emphasis on the quality of service provided and a dramatic reduction in market inefficiencies, which in turn often cuts the amount consumers pay. However, the recent announcement that Uber will not be issued a new private hire licence in London brings with it a cautionary tale…
Dave Ramsey, Director of Data Standards, RICS
27 September 2017
The concern around unlicensed minicab drivers has long been discussed. Vulnerable people, late at night in a stranger’s car, require as much protection as possible, regulatory or otherwise, within an industry that has a somewhat notorious history.
It was therefore somewhat unsurprising to read the reasons that led to Uber’s private hire licence being revoked in London this month by Transport for London (TfL). Of the four reasons cited by TfL, three related to corporate HR procedures or safety concerns. It was the final reason, however, that stood out from a data perspective: the use of Uber’s controversial internal software, Greyball.
Greyball drew on (or misused) data from several disparate sources to stop Uber providing its service to users it suspected of being government officials. By analysing this data, Uber was able to show a suspected official a false version of its network so that they were unable to use the service.
The mechanisms allegedly used were a mix of geo-spatial data, hardware identification, credit checks and usage patterns. A user who frequently opened the Uber app without hiring a car, in a location close to a government building, on a phone whose serial number matched those of cheap, disposable handsets, and paying with a credit card associated with a person connected to a government agency, would supposedly be likely to be refused a ride. Should a driver already be en route, Greyball might even allow the driver to cancel the pick-up, something drivers are usually unable to do. Uber had previously admitted using Greyball during a period in Portland, USA, where it was suspected of evading taxi inspectors and was therefore banned from operating.
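The kind of signal-combination heuristic described above can be sketched in a few lines. To be clear, the field names, weights and threshold below are illustrative assumptions for the sake of the sketch, not Uber’s actual implementation, which has never been published.

```python
from dataclasses import dataclass


@dataclass
class RiderProfile:
    """Hypothetical signals of the sort reportedly fed into Greyball."""
    opens_without_booking: int      # app sessions that never requested a ride
    near_government_building: bool  # geo-spatial signal
    cheap_disposable_handset: bool  # device serial matches known low-cost models
    card_linked_to_agency: bool     # credit-check signal


def suspicion_score(p: RiderProfile) -> int:
    """Combine the independent signals into a single score."""
    score = 0
    if p.opens_without_booking > 10:
        score += 1
    if p.near_government_building:
        score += 1
    if p.cheap_disposable_handset:
        score += 1
    if p.card_linked_to_agency:
        score += 2  # assumed to be the strongest signal, so weighted higher
    return score


def show_real_network(p: RiderProfile) -> bool:
    """Below the threshold, show real cars; at or above it, a 'ghost' view."""
    return suspicion_score(p) < 3
```

An ordinary rider scoring zero would see the real network, while a profile matching several signals at once would be shown the false one; the point of the sketch is simply that each signal is weak on its own, and it is the aggregation across disparate data sources that makes the profiling effective.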
Returning to London, details beyond TfL’s statement on the revocation are scarce. It is nonetheless surprising that TfL cites the use of Greyball in the UK as an issue, given that Uber held a licence to operate in London and would presumably have had no reason to avoid certain customers.
This raises the question: is the misuse of big data, in this case geo-spatial information used to deny services, such a black-and-white issue? In Uber’s case, it can certainly be argued that the data was used for the wrong purpose, that of regulatory evasion; but had software and data been used to re-route a charity door-to-door appeal away from a dimly lit, high-crime neighbourhood on safety grounds, we would likely have lauded the benefits the technology had brought.
As we have seen, TfL felt that corporate governance failures at Uber were serious enough to revoke the licence. Arguably, however, these failures could easily be addressed through new procedures and policies, such as modifying the way medical certificates are obtained or serious offences are reported.
When legal and corporate procedures are not adhered to, companies can expect to suffer financial, and in serious cases criminal, penalties. If a company mistreats customers’ private data in the UK, the Information Commissioner’s Office (the ICO) will, once the General Data Protection Regulation takes effect in May 2018, be able to fine an organisation up to 4 per cent of its global annual turnover.
However, no such well-defined legislation covers the misuse of big data and open data, and regulatory bodies must rely on existing laws to prevent companies from exploiting data.
For governmental data, there are guidelines in the UK. Two years ago, the Re-use of Public Sector Information Regulations 2015 came into force, bringing some changes for public bodies, cultural institutions and users of government information. However, the regulations don’t consider all data held by public bodies to be in scope.
As well as personal data, third-party intellectual property and data that is otherwise excluded (e.g. for national security reasons), any data that is collected, maintained and provided “outside of public task” falls outside the regulations’ scope.
This could be considered ambiguous. By way of example, what happens when a dataset contains data that has been collected outside an organisation’s public task? Should organisations define data collection and accessibility as part of their public task? For some organisations, such as Ordnance Survey and the Met Office, collecting and maintaining data could be considered their primary mission (as the ODI discusses) - but what about organisations where data collection is a by-product of their public task?
The debate around the safety of licensed transport services is ongoing. London provided Uber with 3.5m users, most of whom presumably felt safe enough to use the service and were incentivised by lower prices achieved through lower overheads, commercial directives, or, arguably, poorer worker terms and conditions.
Those overheads increase the cost of doing business, and the resulting competitive pressure hit Parisian and London taxi drivers hard; protests against Uber in 2014 and 2015 escalated to violence. Is this really the price of disruptive technology?
“All companies in London must play by the rules and adhere to the high standards we expect - particularly when it comes to the safety of customers. Providing an innovative service must not be at the expense of customer safety and security.” - London Mayor Sadiq Khan
Uber’s global chief executive Dara Khosrowshahi issued an open letter of apology in the wake of the revocation and pledged to 'make things right' whilst appealing the decision, but this opens the question: how can we maintain our standards when technology disrupts faster than legislation can move?
Within 10 years we can expect to see fully autonomous cars operating as public taxis in cities. Safety and background checks on these 'drivers' will be redundant, but new checks must be brought in. Software versions, vendor certifications, ecological footprints and pricing structures will rank higher on the regulatory list than crime reporting and medical checks for drivers.
Existing taxi drivers will continue to fight, but their targets will diminish. The standards defining safe driving, of which the practical element today is a manual test involving one driver and one examiner, must evolve.
In the future, as we have seen with standards in car manufacturing, the appraisal of driving software will come to the forefront. Organisations like Euro NCAP, which is responsible for physical crash-testing of cars, will move to software test cases and publish data results much as they publish crash-test videos today. Their existing assessment protocols are aimed at a human-readable form, but the industry will look towards the repeatable, deterministic testing prevalent throughout the software industry today.
Generally, technology leads and regulation follows. We discovered how to make fire before we invented the fire extinguisher. In the same fashion, we must not stand in the way of progress with over-regulation, but accept that mistakes will be made and preventative measures added retroactively; these then lead to strong, usable standards. Uber and its forthcoming descendants aren’t going away: they are paving the way for regulation and, ultimately, we will adapt to see technology working alongside the regulatory bodies.