Trust in AI: getting your house in order
“Garbage in, garbage out” is a turn of phrase that most enterprises are now familiar with. Poor data quality can lead to incorrect or misleading outputs, undermining trust in AI and killing business opportunities.
According to Mike Capone, CEO of data analytics giant Qlik, success in AI begins and ends with data mastery: “Today, with the unlimited computing power and advances in generative AI available, we have the ability to spit out garbage at a breathtaking rate,” he warned.
Capone spoke in Florida last week at Qlik Connect, a gathering of industry leaders and developers from the vendor’s vast customer base. The key message at the event was the critical need to implement AI ethically and responsibly.
He explained that data quality determines an organisation's future ability to harness value from AI and analytics, but acknowledged that managing data quality is one of the key challenges his customers face, and one that is preventing many of them from scaling AI use cases.
Capone underscored his point with a recent McKinsey survey that found over 70% of leading organisations said that managing data was one of their top growth inhibitors.
Establishing AI governance
How can businesses adopt AI responsibly? According to Meredith Whalen, chief research officer at market analyst IDC, who spoke at the event, companies can start by establishing an AI governance framework to help them balance pursuing new AI technologies with responsible development.
Whalen also suggested forming an AI council composed of diverse experts to provide guidance and develop practices for model transparency and data integrity.
“Our data shows that organisations are focused right now on model transparency guidelines and data integrity practices. That’s important because transparency and explainability are going to build trust among your users and among your stakeholders,” she said.
Whalen also highlighted the importance of regular employee training on ethics and responsible AI use, especially regarding security, which she recognises as everybody’s responsibility.
AI councils assemble
What are AI councils? Internally, it transpires, Qlik is hot on them: an AI council is essentially a group of diverse experts that provides guidance on AI implementation, focusing on model transparency and data integrity.
A council can also help ensure that AI development aligns with ethical standards and builds trust among users and stakeholders.
Tech entrepreneur and acclaimed AI expert Nina Schick, a member of Qlik’s very own AI Council, suggested that an AI council could also help verify AI-generated content to guarantee its integrity.
This kind of body could also facilitate discussions between industry and government to establish policies that balance innovation with fairness and societal impact.
According to Schick, AI councils can also bring together experts from various sectors to discuss AI’s future and identify necessary actions to promote responsible development and adoption.
Emphasising that computing power and data are the two essential resources driving the AI revolution, Schick believes that while computing power has increased exponentially, data is now the “new oil” that powers AI.
She argued that companies must optimise, codify, and consolidate their data to build sovereign AI and own the production of their proprietary intelligence. Schick claimed this would be critical to success in the age of AI: “All companies of the future, in my view, will be AI-first companies who build their sovereign AI.”
Implementing AI: 5 practical stages
The consensus at Qlik Connect around implementing AI responsibly and effectively boiled down to five practical steps.
The first involves ensuring data integration and quality control. To maintain high quality, all data needs to be integrated, transformed, and governed. Businesses also need to consolidate data from diverse sources, ensuring its integrity through rigorous quality-control measures.
Only with high-quality data can AI yield reliable results. As Mayer said: “You can’t have an AI strategy without having a good data strategy.”
The second step is to form an AI council (as mentioned previously) to develop transparency and data integrity practices and provide regular ethics training for employees.
Understanding data provenance and maintaining rigorous governance are crucial. Transparency builds trust among users and stakeholders, who know that the data driving AI decisions is well-vetted.
The third step is to foster transparency and accountability, using metrics to build trust in AI systems. Equitable access to comprehensive, trusted data is essential. All stakeholders should rely on the same data pool to ensure consistency and reliability in AI outcomes. As Capone said: “You need access to complete and trusted data for everybody.”
Adopting an agile approach was the fourth key learning from this event. Companies need to ensure that they continuously learn and adapt policies as AI technology evolves.
This includes experimenting with new techniques while maintaining alignment with ethical and market needs.
“If you are in an organisation that’s risk-averse or hesitant to get started because you’re concerned about the risks of AI, the biggest risk is to do nothing. Your competition is out there experimenting,” warns IDC’s Whalen.
The final piece of advice, more of a prediction, comes from Qlik’s VP of market readiness, Martin Tombs. He suggested focusing on AI for specific business applications rather than generic AI models. These models, he added, will need to be validated continuously to build trust over time.
“Achieving trust in unstructured data is about keeping your blast radius short and focussing on your business. I predict we will start with more generic LLMs and then evolve into domain-specific LLMs. There’ll be ones specific to call centres, support teams, salespeople, etc. And trust will come when it’s an accurate answer.”