The financial services industry is inundated with an increasing amount of data – ranging from a consumer’s banking transactions to analyst projections for a stock price – and has turned to algorithms to cope by converting data into actionable insights. For example, an algorithm may look at a consumer’s credit card transactions and alert them that they’re spending too much money dining out each month. These kinds of actions convey a message to consumers that their financial service providers are using data to look out for them.

According to the research firm IDC, the big data technology and services market is expected to reach $48.6 billion by 2019. And companies are increasingly leveraging big data technologies to improve their return on investment on advertising and increase operational efficiency. But there is also an insidious side of big data, including discrimination, and here we'll consider why financial services companies may want to take big data insights with a grain of salt. (For more, see: How Big Data Has Changed Finance.)

Making Assumptions About Consumers Based on Data

Big data technologies are designed to find patterns in vast amounts of data. For example, a credit rating agency may look at a person’s social media profile, job history and other public data to help banks target credit card offers to certain consumers. Proponents say these technologies can reduce or eliminate bias by automating the screening process and removing human judgment from the loop, but critics argue that improper use can perpetuate, exacerbate or mask discrimination.

A recent White House white paper on big data technologies warned that companies should use big data with care to avoid potential problems. For instance, a technology that helps financial institutions make lending decisions by creating a credit score from an individual’s social media connections could further cement existing disparities: consumers whose social networks consist largely of people who are disconnected from or ignored by mainstream lenders would see those disadvantages reflected in their own scores.

The Federal Trade Commission has already acted to prevent big data discrimination. In 2008, the agency settled with CompuCredit Corporation – a Visa and MasterCard marketing firm – after it failed to disclose its use of a behavioral scoring model that reduced some consumers’ lines of credit. It remains to be seen whether further investigations or regulations aimed at curbing discriminatory practices are on the way under the next administration.

Using Data to Support Decision-Making

Big data technologies may be proficient at finding patterns, but that doesn’t always translate to accurate conclusions. For example, credit card companies may rely on big data technologies to identify fraudulent transactions, but most people have had a legitimate transaction flagged as fraudulent at some point – a false positive. And the same technologies being applied to approve individuals for lines of credit could likewise end up denying qualified applicants.
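To make the false-positive problem concrete, here is a minimal sketch – hypothetical, and not any card issuer's actual system – of a rule that flags a transaction when it deviates sharply from a customer's typical spending. A large but perfectly legitimate purchase trips it anyway:

```python
from statistics import mean, stdev

def is_flagged(history, amount, threshold=3.0):
    """Flag `amount` if it sits more than `threshold` standard
    deviations from the customer's historical average spend.
    A toy pattern-finding rule, not a production fraud model."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(amount - mu) > threshold * sigma

# Typical small card purchases for one hypothetical customer.
history = [12.50, 8.99, 23.10, 15.00, 9.75, 18.40, 11.20]

# A legitimate one-off furniture purchase gets flagged anyway:
# the pattern is real, but the conclusion ("fraud") is wrong.
print(is_flagged(history, 180.00))  # → True (a false positive)
print(is_flagged(history, 16.00))   # → False (ordinary purchase passes)
```

The same shortcoming applies when a threshold like this screens credit applicants instead of transactions: the model finds a genuine pattern but draws the wrong conclusion about the individual.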

Financial professionals should use big data technologies to support their decisions rather than relying on them to make decisions outright, and always err on the side of caution if there are any doubts. In addition, many companies recognize the need to exercise caution when building big data models to avoid regulatory penalties. Well-designed data models use representative data sets that are continuously tested to ensure accuracy and account for biases.
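One simple form such bias testing can take is a disparate-impact check on a model's outputs. The sketch below uses hypothetical helper names, and its 80% threshold is borrowed from the "four-fifths rule" heuristic used in U.S. employment law rather than from any financial regulation; it compares approval rates across two groups of applicants:

```python
def approval_rate(decisions):
    """Share of approved applications (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def passes_four_fifths(group_a, group_b, ratio=0.8):
    """Return True if the less-favored group's approval rate is at
    least `ratio` (80% by default) of the more-favored group's rate."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= ratio

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 80% approved
group_b = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 40% approved

print(passes_four_fifths(group_a, group_b))  # → False: worth investigating
```

A failed check like this doesn't prove discrimination on its own, but it tells the model's owners that the disparity deserves a closer look before the model is relied on for real decisions.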

While there are risks involved, big data is fast becoming a necessity in today’s business environment. For example, Morningstar’s HelloWallet leverages big data to help employers advise their workers on ways to maximize salaries, benefits and other resources, as well as improve communication over time. (For more, see: The Big Play In Big Data.)

The Bottom Line

Big data plays an increasingly important role in the financial services sector, where it’s used for everything from targeting advertisements to optimizing portfolios. While these technologies have many benefits, critics are quick to point out that they can also become a source of discrimination if they are developed or used improperly. Regulators have become increasingly aware of this potential for bias, and the financial services industry should exercise caution to ensure compliance as its troves of data continue to grow. (For more, see: How Big Data and Artificial Intelligence Affect Investing.)