AI can be biased, which is why it needs humans to teach it best practices.

Keeping the balance against AI biases

Dan Baird

May 1, 2020

We’ve all heard the stories of how badly AI can get things wrong. Just witness MS Word’s spell and grammar checker, or the content bubble that forms around you on Facebook and other social media platforms.


Perhaps now more than ever, even laypeople can see the power of AI. We’re stuck at home, using our smart devices to work, study, or play, and more of us will notice AI’s peccadilloes as a result.


Algorithms are only as good as the humans who create them. And humans can try, but can’t entirely avoid, bringing their own biases to the work. AI develops biases of its own, which can lead to wrong, even disastrous, decisions if the humans making them rely solely on AI data and don’t balance the biases.


This is AI’s weakness. Machine learning can drift toward a bias as it takes in information. That’s natural. But the fact that AI can get things wrong shouldn’t stop us from using it. On the contrary, we need to keep using AI so we can learn what tends to go wrong.


We discover, anticipate and intercept. 


AI should never be 100% in control, especially in marketing and sales, where your goal is to communicate with human customers. We still need humans in the loop (human-in-the-loop) to make decisions and judgment calls, and to pick up on nuances and act on them.


Understanding the weakness of intelligent systems


It’s important to understand AI so we can use it properly and avoid its potential faults.


AI does research on a scale humans can’t. That’s why we need it. 


We only need to remember that AI researches what we tell it to, and that it uses whatever data is already available unless we teach it to build better data sets from more inputs.


AI is naturally biased. Why? Because AI functions and builds its knowledge through avenues with inbuilt bias. 


Data


AI starts and ends with data. It uses data to deliver data. 

The issue is that data is always dated and shaped by other data. This is where your AI might inadvertently become biased: we have data that backs up biases, from the innocuous to the egregious.


Interaction


Siri and Alexa are both AIs that leverage user interaction to become smarter assistants. Even Gmail depends on interaction to learn which emails people open and deem safe.


In March 2016, Microsoft launched Tay, a chatbot built for casual and playful conversation. Within 24 hours, users had turned Tay into a racist by feeding it racist remarks.


This type of machine learning is perhaps the most prone to bias, and it needs consistent tempering to be truly intelligent. It’s annoying when Gmail insists on marking an email as unsafe even though you know it’s safe. And smart homes can be annoyingly presumptuous, turning off the lights at certain times or when you haven’t moved. Do you really need to flap your arms now and then while you’re reading?


Personalization


This one is more widely known now because of recent scandals: you can get stuck inside an information bubble when algorithms that depend on personalization go unchecked.


Also known as “confirmation bias,” this happens when the algorithm uses your likes, opens, purchases, and shared content to confirm what content you want. It then keeps feeding you that same content, related to your “confirmations” and queries.
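
To see how the loop forms, here’s a minimal sketch in Python. The content categories, the engagement model, and the explore_rate knob are all hypothetical, but the feedback dynamic is the real mechanism:

```python
import random
from collections import Counter

def recommend(history, catalog, explore_rate=0.0):
    """Recommend the category the user has engaged with most.

    With explore_rate=0.0 the recommender only ever confirms past
    behavior: the information bubble. A small explore_rate mixes in
    other content and keeps the feed balanced.
    """
    if history and random.random() >= explore_rate:
        top_category, _ = Counter(history).most_common(1)[0]
        return top_category
    return random.choice(catalog)

catalog = ["politics", "sports", "tech", "cooking"]
history = ["tech"]  # one click is all it takes to start the loop

for _ in range(10):
    item = recommend(history, catalog, explore_rate=0.0)
    history.append(item)  # the user "engages", reinforcing the bias

print(Counter(history))  # Counter({'tech': 11}) -- a perfect bubble
```

Every iteration narrows the feed a little more, which is exactly why these systems need a deliberate dose of variety built in.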


From a business perspective, a big store can lose sales if its AI keeps pushing baby items to a man who bought baby clothes once. For a baby shower. For someone else’s baby. 


Goals


This is where Google has finally cracked down. It used to be that the pages with the most views and clicks dominated the search results, even when they were low-quality resources for the query.


Goal-biased AI serves up content that achieves goals (clicks and revenue from clicks) instead of useful content. Machine learning can drift toward stereotyping simply from aiming for click-through behavior. 
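
A hedged illustration of the difference, with invented items and scores: ranking purely by predicted clicks surfaces clickbait, while blending in a quality signal keeps useful content competitive.

```python
# Hypothetical content items: predicted click-through rate vs. an
# editorial usefulness score. All numbers are invented for illustration.
items = [
    {"title": "Shocking clickbait!",   "predicted_ctr": 0.30, "usefulness": 0.2},
    {"title": "In-depth how-to guide", "predicted_ctr": 0.08, "usefulness": 0.9},
]

# Goal-biased ranking: optimize for clicks alone.
by_clicks = sorted(items, key=lambda i: i["predicted_ctr"], reverse=True)

# Balanced ranking: blend in a quality signal so the goal metric
# alone doesn't decide what users see.
by_blend = sorted(
    items,
    key=lambda i: 0.5 * i["predicted_ctr"] + 0.5 * i["usefulness"],
    reverse=True,
)

print(by_clicks[0]["title"])  # Shocking clickbait!
print(by_blend[0]["title"])   # In-depth how-to guide
```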


AI unchecked leads to faulty conclusions


Knowing the biases above, it’s a big mistake for organizations to assume their data is already complete and accurate for their market. That assumption lets the AI develop bias naturally, because it relies only on the organization’s own data and its customers’ past behaviors and inputs.


In turn, this creates the greater threat of decisions being made, or content being pushed, on the basis of that biased data, resulting in lost opportunities, overlooked weaknesses, or outright embarrassing situations.


According to DataRobot research, 42% of AI professionals in the US and UK are “very” or “extremely” concerned about AI bias. 


I’m quite surprised that the percentage isn’t bigger. 


Balancing AI biases


Perhaps the concern isn’t higher because AI’s help is tremendous compared to its potential faults. Companies that used AI saw a 50% increase in their leads, according to a 2016 Harvard Business Review study.


AI takes care of a mountain of data retrieval, analysis, and organization that would take a human team years or even decades to accomplish. It gives marketing and sales teams clear insight so they can establish real connections rather than praying for luck with guesswork.


Every business should maximize the benefits of their AI technology through consistent and proactive human direction as a check against biases. 


Feed your AI a balanced diet of information


Truly intelligent AI needs as many varied inputs as possible; combining their results gives better insights. For example, Wrench doesn’t just look at an organization’s own data, but also looks outside and around it for richer, constantly updated insights.
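
As a rough sketch of the principle (not Wrench’s actual pipeline; the accounts and fields below are hypothetical), enriching internal records with outside signals keeps a model from simply re-learning the company’s own history:

```python
import pandas as pd

# Internal data only: what the organization already knows about itself.
crm = pd.DataFrame({
    "account": ["Acme", "Globex"],
    "past_purchases": [12, 3],
})

# Hypothetical external signals: market, intent, or firmographic data.
external = pd.DataFrame({
    "account": ["Acme", "Globex"],
    "industry_growth": [0.01, 0.15],
    "recently_funded": [False, True],
})

# A balanced view combines both sources, so insights aren't limited
# to the organization's own past behavior.
enriched = crm.merge(external, on="account")
print(enriched)
```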


You need human involvement


Sales and marketing teams are meant to use AI, not to be replaced by AI. Even chatbots are fed by human content creators. 


A prime example of humans saving the day is Google’s “Smart Compose”, introduced in May 2018. Smart Compose “predicts what users intend to write in emails and auto-completes sentences.” Six months later, developers recalibrated it to stop suggesting gender-based pronouns. You can get “you” or “it” but never “him” or “her.”


They recognized the AI’s weakness: historical, biased data. A prompt about meeting an executive, for instance, would suggest “he” or “him,” because the AI assumed the executive was male.


It was a human research scientist who prompted the change. He typed “I am meeting an investor next week” and saw Smart Compose suggest the follow-up question “Do you want to meet him?” The investor was a woman.
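
Google hasn’t published Smart Compose’s exact fix, but the general idea of that human-prompted guardrail, filtering gendered pronouns out of suggested completions, can be sketched in a few lines (the tokenization here is deliberately simplified):

```python
# Not Google's implementation, just a sketch of the guardrail idea:
# block any suggested completion that asserts a gender the model
# cannot actually know.
GENDERED = {"he", "him", "his", "she", "her", "hers"}

def safe_suggestions(candidates):
    """Drop suggested completions containing gendered pronouns."""
    return [
        s for s in candidates
        if not GENDERED & set(s.lower().replace("?", "").split())
    ]

candidates = ["Do you want to meet him?", "Do you want to meet?"]
print(safe_suggestions(candidates))  # ['Do you want to meet?']
```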


While not all errors are equal, getting gender wrong, especially in a high-stakes context, can be very poor form. 


AI data alone should not determine decisions


While humans are not ideal decision-makers because of our own biases, neither is AI. The insights your AI delivers should help but should never be the only determining factor in decisions. 


Consider cinematic AI, which can only access data on previous blockbusters and can completely miss the potential in films that don’t match the criteria of what AI has found to be “blockbuster material.” Where would we be if human producers only made decisions according to AI’s suggestions? 


Consider the AI in most sales tools. Aside from the obvious bias from historical sales data, AI lead scoring can also lead organizations to conflate lead generation with lead scoring: the AI spits out the leads that fit customer personas, and companies then assume these are their ideal customers.
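
A toy example of why that conflation happens. The features and weights are hypothetical, but they show the catch: a scorer trained on historical wins can only reward leads that resemble yesterday’s customers.

```python
# Weights a model might learn from historical won deals alone.
# Feature names and values are invented for illustration.
HISTORICAL_WEIGHTS = {
    "industry_saas": 0.6,    # most past wins were SaaS companies...
    "company_size_50+": 0.3,
    "visited_pricing": 0.1,  # ...so actual buying intent barely counts
}

def score_lead(lead_features):
    """Sum the learned weights of the features this lead exhibits."""
    return sum(HISTORICAL_WEIGHTS.get(f, 0.0) for f in lead_features)

persona_match = ["industry_saas", "company_size_50+"]
ready_buyer = ["visited_pricing"]  # new segment, but strong intent

print(score_lead(persona_match))  # ~0.9: looks like the "ideal customer"
print(score_lead(ready_buyer))    # 0.1: deprioritized despite real intent
```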


Sales and marketing teams, by contrast, can immediately recognize the need to confirm interest and bring leads to readiness first.


Historical sales data is also only a minor, dispensable detail, since human teams understand the changing market as it happens and can therefore send the right outreach and advertising to convert the leads that AI lead scoring deemed unsuitable.


The human aspect


We have AI too, so to speak: gut instinct, a shortcut we don’t even notice; intuition for those quick decisions and conclusions we reach without even thinking. It comes from experience, human creativity, and empathy. It comes from company values, personal integrity, and market dynamics.


Your sales team’s ability to read between the lines of a dubious customer’s comments — that’s not something AI can do yet. All this information is — to use a current buzzword — transmitted from human to human, and is not accessible to AI.   


But we could really use AI because we, as humans, are not the best decision-makers. We just need to understand that the AI models we create can be just as faulty. We need to remember that the integrity of AI stands on foundations we ourselves build. Sometimes these foundations are solid, and sometimes they’re little more than mud. 


As we continue to evolve as a society, so will AI. And there’s no denying the power of AI in processing the data available to it.


So human teams should continue to utilize AI. AI bias can lurk in little-suspected places, including where you think AI is doing well. In using AI, we should seek to discover biases and consistently correct them, teaching AI to be better. That’s how it should be.
