Effective PR

Financial adviser's AI back end is pants.

BIScom Subsection: 
Nigel Morris-Cotterill

A financial adviser has closed its "artificial intelligence"-driven advisory system after the regulator raised concerns about both the quality of the advice and the supervision of the system. It is a landmark case about the deployment of computer-driven systems - what used to be called "expert" systems - with implications across the entire spectrum of financial and other services, including customer due diligence in relation to financial crime risk management. It is a potential game-changer for the rapidly rising, lightly regulated fintech sector.

"Digital advice, also known as robo-advice, is the provision of automated financial product advice using algorithms and technology, without the direct involvement of a human adviser," says Australia's financial services regulator, the Australian Securities and Investments Commission (ASIC). ASIC was not happy with the system adopted by Australian Financial Services Licence holder Lime FS Pty Ltd. Lime's corporate authorised representatives, Plenty Wealth Pty Ltd (Plenty Wealth) and Lime Wealth Pty Ltd (Lime Wealth), are digital advice providers authorised to provide personal financial advice to consumers.

Plenty Wealth provided advice via "an online tool" about budgeting analysis, life insurance reviews, tax, investment and superannuation recommendations. Lime Wealth provided advice via "an online tool" about the establishment of self-managed super funds (SMSFs), purchasing property with superannuation, commencing and ceasing pensions, and contributions into superannuation.

ASIC reviewed a number of sample files and became concerned both that the programs were not delivering the best advice and that Lime (as licence holder) did not have sufficient supervision over what the programs were saying.

The problems raised by ASIC are not peripheral. In fact they are absolutely central to the question of financial Know Your Customer for the purposes of presenting financial advice: "ASIC was concerned that the level of inquiries made by the online tools about client objectives, financial situation and needs, were inadequate. In some instances, the recommendations generated by the tools were in conflict with client goals or with other recommendations also generated by the tools." The use of the word "inquiries" is correct - done properly, the questioning of a prospective investor must be rigorous; "enquiries" is a far more social term. The system, ASIC considers, did not ask tough enough questions and therefore could not be expected to provide fully comprehensive and best advice.
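ASIC's second concern - recommendations that conflict with client goals or with one another - is, at root, a consistency check that a digital advice tool can and arguably must run on its own output. A minimal, purely illustrative sketch of such a check follows; every field name and rule here is hypothetical and is not drawn from Lime's tools:

```python
# Illustrative only: a toy consistency check between a client's stated
# goal and the recommendations a digital-advice tool has generated.
# All field names and rules are hypothetical, not taken from any real system.

def find_conflicts(client, recommendations):
    """Return (product, reason) pairs for recommendations that conflict
    with the client's goal or with other recommendations."""
    conflicts = []
    for rec in recommendations:
        # A client seeking capital preservation should not be steered
        # into a high-risk product.
        if client["goal"] == "capital_preservation" and rec["risk"] == "high":
            conflicts.append((rec["product"], "high risk vs capital preservation"))
        # A recommendation to commence a pension contradicts a sibling
        # recommendation to keep increasing contributions.
        if rec.get("action") == "commence_pension" and any(
            r.get("action") == "increase_contributions" for r in recommendations
        ):
            conflicts.append((rec["product"], "pension commencement vs ongoing contributions"))
    return conflicts

client = {"goal": "capital_preservation"}
recs = [
    {"product": "Growth Fund", "risk": "high"},
    {"product": "SMSF Pension", "risk": "low", "action": "commence_pension"},
    {"product": "Super Top-Up", "risk": "low", "action": "increase_contributions"},
]
for product, reason in find_conflicts(client, recs):
    print(product, "-", reason)
```

A real system would need far richer rules, but the point stands: if even a toy check like this can flag internally contradictory advice, the absence of such supervision in a licensed tool is exactly the gap ASIC identified.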

Lime accepted ASIC's concerns and has agreed, voluntarily, to close both of the tools used by its representatives. It is not clear when, if at all, they will come back into use and, if they do, in what form.

ASIC Commissioner, Danielle Press said, “Digital advice tools offer a convenient and low-cost alternative to consumers who may otherwise not seek personal financial advice. However, the advice provided through these tools must meet the same legal obligations required of human advisers – the advice must be appropriate to the client and comply with the best interests duty. ASIC expects AFS licensees and financial advisers using or recommending digital advice tools to ensure that they adequately monitor and test the advice for quality and appropriateness.”

That's all well and good but it is vital that this case is not seen as being in a silo: it is not about financial advice - it's about delegating functions to machines and, in particular, the screening of customers.

As so-called regtech is deployed in relation to screening (however that is defined), the same questions arise. And the implications for fintech are enormous for the simple reason that most fintech companies are predicated on the assumption that code will replace people and, in particular, will facilitate a quick, cheap and easy path to approving new customers in a non-face-to-face environment.

That, ASIC has said, cannot be the case. It is clear that machines require supervision just as much as people do, including as to quality of results.

Of course, there are those who will say that the problem is not the machines but the quality of the algorithms and the design of the questionnaires. That's absolutely right, and here both financial services businesses and regulators need to understand exactly what the use of such machines means. Unless the questions are set in-house, unless the algorithms are developed by in-house risk managers and unless the process by which people interact with the system is designed in-house, then it's outsourcing. It makes no difference whether that's done in a remote call centre or on a server in the basement - it's still outsourcing, because the basis of the decisions is determined by someone outside the company's management structure.

The case should cause considerable interest amongst regulators and risk and compliance officers in financial services businesses. Those jurisdictions which have worked hard to become "fintech hubs" need to think long and hard about the effect of this case.

The case has not ended up as a contentious matter. Some will take heart from that and form the view that, until someone has been penalised, there is no reason to change; some regulators which have deliberately set out to adopt light-touch regulation may decide to ignore it. In due course, perhaps, that attitude will lead to reputational risk for those that have thrown open their doors to all and sundry in the hope of reining in excesses later.
