
UK government and King Charles’ safety concerns highlight the importance of AI ethics


While we should undoubtedly proceed with care and caution, underpinning AI deployment with good data allows organisations to balance regulatory and moral risks, says Yohan Lobo, Industry Solutions Manager, Financial Services at M-Files

AI safety and security have been hotly discussed topics in recent weeks, with numerous high-profile figures expressing concern at the pace of global AI development at the UK’s AI Safety Summit, held at Bletchley Park.

Even King Charles weighed in on the subject, stating in a virtual address to the summit’s attendees: “There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure.”

Additionally, in his first King’s Speech, delivered on Tuesday to set out the UK government’s legislative agenda for the coming session of parliament, King Charles announced the government’s intention to establish “new legal frameworks to support the safe commercial development” of revolutionary technologies such as AI.

Yohan believes that avoiding the pitfalls highlighted at the summit and in the King’s Speech hinges on organisations leveraging AI solutions built on a foundation of high-quality data.

Yohan said: “Mass adoption of AI presents one of the most significant opportunities in corporate history, and businesses will do their utmost to cash in on it: this technology is capable of delivering exponential increases in efficiency and allowing organisations to scale at speed.

“However, concerns rightfully raised at the UK’s Global AI Safety Summit and reinforced in the King’s Speech demonstrate the importance of developing AI ethically and ensuring that organisations looking to take advantage of AI solutions consider how they can best protect their customers.

“Data quality lies at the heart of the global AI conundrum – if organisations intend to start deploying Generative AI (GenAI) on a wider scale, it’s vital that they understand how Large Language Models (LLMs) operate and whether the solution they implement is reliable and accurate.

“The key to this understanding is having control over where the LLM gains its knowledge from. For example, if a GenAI solution is given free rein to scour the internet for information, the suggestions it provides will be untrustworthy, because you can’t be sure they come from a reliable source. Bad data in always means bad language out.

“In contrast, if you only allow a model to draw from internal company data, the degree of certainty that its answers can be relied upon is significantly higher. An LLM grounded in trusted information can be an incredibly powerful tool and a dependable way of boosting an organisation’s efficiency.
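The grounding principle described above can be sketched in a few lines of code. This is a minimal, illustrative pattern only: the document names, the lookup logic, and the refusal message are all hypothetical, and a real deployment would pair a vetted internal document store (typically via retrieval-augmented generation) with an actual LLM rather than this keyword match.

```python
# Sketch: answer questions only from vetted internal documents, and
# refuse rather than fall back to untrusted external sources.
# All document names and contents below are illustrative placeholders.

INTERNAL_DOCS = {
    "expenses-policy": "Employees may claim travel expenses within 30 days.",
    "data-retention": "Client records are retained for seven years.",
}

REFUSAL = "No answer available from trusted internal sources."


def grounded_answer(question: str) -> str:
    """Return text from a trusted internal document, or refuse."""
    words = question.lower().split()
    matches = [
        text for text in INTERNAL_DOCS.values()
        # Naive keyword overlap stands in for real retrieval/ranking.
        if any(word in text.lower() for word in words)
    ]
    if not matches:
        # No trusted source covers this question: decline rather than
        # guess from the open web ("bad data in means bad language out").
        return REFUSAL
    return matches[0]
```

The design choice mirrors the quote: the model’s knowledge boundary is set by the organisation’s own data, so an unanswerable question produces an explicit refusal instead of an unverifiable guess.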

“The level of human involvement in AI integration will also play a crucial role in its safe use. We must continually treat AI like an intern, even if a solution has been operating dependably for an extended period of time. This means regular audits and considering the findings of AI as recommendations rather than instructions.”
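The “treat AI like an intern” approach above amounts to a human-in-the-loop gate: every AI output is logged as a pending recommendation and nothing executes until a named reviewer signs off. The sketch below is a hypothetical illustration of that pattern, not any vendor’s actual workflow; all function and field names are assumptions.

```python
# Sketch: AI output recorded as a recommendation, never auto-executed.
# A named human reviewer must approve or reject each entry, and every
# decision stays in an audit log for the regular audits mentioned above.

from datetime import datetime, timezone

audit_log = []


def propose(recommendation: str) -> dict:
    """Log an AI recommendation as pending human review."""
    entry = {
        "recommendation": recommendation,
        "status": "pending",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)
    return entry


def review(entry: dict, approved: bool, reviewer: str) -> dict:
    """A named human accepts or rejects the recommendation."""
    entry["status"] = "approved" if approved else "rejected"
    entry["reviewer"] = reviewer
    return entry
```

Because the log retains every proposal and decision, auditors can later verify that no AI finding was acted on without a human sign-off.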

Yohan concluded: “Ultimately, companies can contribute to the safe and responsible development of AI by only deploying GenAI solutions that they can trust and that they fully understand. This begins by controlling the data the technology is based on and ensuring that a human is involved at every stage of deployment.”







Copyright © 2022 Respect Investment. All Rights Reserved.