
How businesses can use artificial intelligence for good


The hottest topic nowadays is the artificial intelligence (AI) chatbot called ChatGPT. Since November, the company OpenAI has allowed the public to converse directly with the AI tool, impressing users with its human-like answers to any question. It seems we now have truly intelligent AI that can help us in ways we previously only imagined.

Is this really the case? I would say: “Not quite.” We must fully understand the proper use as well as the risks that come with this latest AI tool before embracing its use.

First, the problems caused by business use of earlier-generation AI algorithms have not been solved. Some examples:

• Social media and streaming service algorithms have led to addiction, depression, and social conflict among users

• Political operators have used social media algorithms to misinform, manipulate, and divide voters

• Self-driving algorithms in cars and planes have been linked to the deaths of several people

• Algorithms used for approving bank loans, hiring job applicants, and suggesting policing strategies and jail sentences have been shown to develop dangerous biases

AI has been deployed in deceptive ways that give it too much credit for “intelligence” without sufficient regard for the risks to users or the public.

Second, while I am impressed with the seemingly knowledgeable outputs of ChatGPT, I usually discover factual errors when I check its answers for accuracy. For example, it repeatedly gave me the wrong way to format a journal article and attributed to me articles I never wrote. AI developers call these “hallucinations.”

And herein lies the problem: a large language model does not really “know” or “understand” anything, even when it appears to do so. Computer scientists “trained” the model to talk like a person by feeding it enormous amounts of human text data from various internet and digital sources. Computational formulas (algorithms) in the model calculated patterns and correlations based on the text data until it “learned” to produce human-like answers to questions asked of it. Thus, a language model is like a computerized parrot that mimics human speech by observing patterns in how people talk about various topics.
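To make this “computerized parrot” idea concrete, here is a toy sketch in Python. It is purely my own illustration, not how ChatGPT is actually built: a tiny program that “learns” which word tends to follow which in a made-up sample text, then generates fluent-sounding output by replaying those patterns, with no grasp of what it is saying.

from collections import Counter, defaultdict
import random

# A tiny made-up sample of "training" text (illustrative only).
training_text = (
    "the bank approved the loan "
    "the bank denied the loan application "
    "the parrot repeats the words it hears"
)

# Count which word tends to follow which word (pure pattern-matching).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def parrot(start_word, length=6):
    # Generate text by repeatedly picking a statistically likely next word.
    output = [start_word]
    for _ in range(length):
        options = follows.get(output[-1])
        if not options:
            break
        choices, counts = zip(*options.items())
        output.append(random.choices(choices, weights=counts)[0])
    return " ".join(output)

print(parrot("the"))  # e.g., "the bank approved the loan": fluent, yet mindless

Real language models are enormously larger and more sophisticated than this, but the underlying principle is the same: they produce likely-sounding continuations from patterns in text rather than from understanding.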

Remember that a language model is not intelligent even when it sounds like it is. It has no sense of the meaning, real-life context, underlying reasoning, or intent behind what it is saying. Worse, its output is affected by the errors and biases in the data fed into it; as they say, garbage in, garbage out. Hence, language models, or AI in general, cannot be trusted by themselves for important information needs or for making critical decisions.

Clearly, the government needs to regulate AI properly for appropriate business use. Meanwhile, businesses can maximize the benefits (Do good) and avoid AI use’s sins (Do no harm) by following four basic principles.

TO DO GOOD, BUSINESSES MUST:

1. Educate AI users to fully exercise informed consent on the use of their data and to ensure their personal benefit. Businesses must explain how personal and other data are used by AI to benefit the user, without overpromising such benefits merely to promote use. For example, such cautionary guidance is given to potential investors in financial products. The same must apply to AI use.

2. Use AI to promote human well-being. People need ways to improve their health, sharpen their critical thinking, and better understand others. AI tools like Pol.is, for example, enable people with diverse or opposing viewpoints to have conversations and find common ground.

TO DO NO HARM:

1. Fully test the AI tool in various contexts of use to understand and mitigate any risks to users. Technology-based tools, from cars and power drills to computers and microwave ovens, have been meticulously tested by engineers to check for potential failures, safety issues, and other unintended harms to users. Such testing and safety protocols should apply to AI tools as well.

2. Fully warn users about the negative effects of AI and moderate against excessive use. The tobacco industry had to be forced by law to disclose that smoking is addictive and can lead to serious diseases. Such warnings should apply to AI tools. Hans-Georg Moeller, a professor of philosophy and YouTuber, issues this excellent warning at the end of each of his videos: “This video is produced to attract your attention and to promote this channel. The platform you are using is designed to be addictive and to mine your data for profit.” All businesses should issue such warnings as they apply.

Television host John Oliver summarized my main point very well: “The problem with AI right now isn’t that it’s smart. It’s that it’s stupid in ways that we can’t always predict.” If we remember this simple fact, we can be critical users of AI.

Benito L. Teehankee is the Jose E. Cuisia professor of business ethics at De La Salle University.

benito.teehankee@dlsu.edu.ph
