Adnan Zai, Advisor to Berkeley Capital, is on the cutting edge of the financial world. We sat down with him recently to discuss the impact that ChatGPT is bound to have on the world of finance, as well as life in general. Although Zai understands the need for streamlining work and moving forward with technology, there are still some questions about the magnitude of ChatGPT’s impact on the future.

Kraven: This whole story started when OpenAI launched as a non-profit to foster collaboration among researchers and institutions, and entrepreneurs like Elon Musk and Peter Thiel backed the project to push artificial intelligence to its limits. OpenAI became a for-profit business in 2019. But some of the same supporters who opened Pandora’s box are now questioning the implications of ChatGPT, and thousands of tech leaders and prominent public figures have signed a petition to slow down the speeding train.

ChatGPT stands for Chat Generative Pre-Trained Transformer. In plain English, it is an AI chatbot that uses natural language and goes far beyond simple voice assistants like Siri. ChatGPT trains on a wide range of articles, websites, books, and social media, but it is also trained with human feedback. After its launch in November 2022, it reached over 100 million users by January 2023, and the number continues to grow.

“Over the course of a single week in early 2023, employees at the average 100,000-person company entered confidential business data into ChatGPT 199 times, according to research from data security vendor Cyberhaven.” The reach of ChatGPT is already profound. It is the fastest-adopted software application ever created. Given the concerns voiced by ChatGPT’s own creators and the speed with which it has taken the world by storm, do you think people are smart to be concerned about it?

Adnan Zai: This tool is very good at what it does. But the speed at which it has been rolled out is problematic. Companies and individuals are jumping on board without fully thinking through the ramifications. The moral issues could be far-reaching, and no one knows how far things can go. Already, realistic voice imitators and “deep fake” videos are wreaking havoc with the status quo. When ChatGPT pulls information from its wide variety of sources, it raises the question of whose agenda is behind the particular sources it chooses. This opens the world to the possibility of nefarious characters pushing their own agendas on an unsuspecting public. And people won’t even know it is happening.

Kraven: Yes, this has many people worried. In today’s world it is much harder to detect whether something is real. Forbes Magazine reports that “This is because the underlying technology known as natural language processing or natural language generation (NLP/NLG) can easily mimic written or spoken human language and can also be used to create computer code.”

Even the creator of ChatGPT himself, Sam Altman, is worried. “We’ve got to be cautious here. I think it doesn’t work to do all this in a lab. You have got to get these products out into the world and make contact with reality. Make the mistakes when stakes are low. But all of that said, I think people should be happy that we are a little bit scared of this.”

Do you think we are in big trouble?

Adnan Zai: Not necessarily. OpenAI, creator of GPT-3 and ChatGPT, has included its own rigorous safeguards to prevent malicious acts. It has many filters in place, looking for specific phrases that point to problems. But each person and company needs to remain vigilant in these matters. Altman is wise to sound the alarm, but there are elements in place to fight the problems too.

Kraven: The FBI’s Internet Crime Report explains that phishing is the most common IT threat in America. Up to this point, phishing emails have been easy to spot by their spelling and grammar mistakes. Now that ChatGPT can produce fluent, error-free English so easily, the game is changing. What do you believe should be done?

Adnan Zai: Companies need to make certain that their IT teams are overturning every rock in order to stop phishing attacks. Fortunately, “ChatGPT detector” technology already exists, and it is likely to advance just as quickly as ChatGPT itself. Companies also need to train and retrain their employees. If you ask it to do something unethical, ChatGPT itself says it will only “assist with useful and ethical tasks while adhering to ethical guidelines and policies.”

The bad news is that some researchers think the “bad actors” are already finding workarounds to keep themselves one step ahead. And hackers are smart. They find the holes in existing code and can use ChatGPT to crash a system more easily.

Kraven: But the “good actors” can use it too?

Adnan Zai: Yes, ChatGPT can also create automated reports and summaries that counter attacks or threats, and it can customize them for the appropriate audience, whether a company executive or the head of an IT department.

Kraven: Yes, I was just reading in the Harvard Business Review about the Israeli security firm Check Point, which recently discovered a thread on a well-known underground hacking forum from a hacker claiming to be testing a malware chatbot. I would assume there are more where that one came from on the “dark web.” The bad news is that the “bad guys” are working hard at hacking, but the good news is that the “good guys” are stopping them.

Adnan Zai: One thing that people need to remember is that the power of ChatGPT goes both ways. If bad actors can hack into systems, then good actors can use ChatGPT to prevent or solve the problem.

Kraven: Yes, that is true if IT teams keep up with it. Let’s get back to Altman. Should we be more worried that the creator of ChatGPT is so cautious? When asked about the worst possible outcome, Altman said there is a set of very bad outcomes. “One thing I’m particularly worried about is that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.” However, he also said the technology could be “the greatest humanity has yet developed.”

When it comes to the large-scale disinformation he describes, what are the ramifications of this technology being used on the campaign trail, especially in our already contentious political climate?

Adnan Zai: As I said before, ChatGPT is clever. It can make information seem objective when it is actually infused with bias or distortion. As a vehicle for spreading propaganda, ChatGPT is extremely powerful. The current administration under President Joe Biden has created a “Blueprint for an AI Bill of Rights,” which must be applied and strengthened given the power of ChatGPT. There needs to be a wide range of checks and balances in order to keep our country and our politics safe.

Kraven: The Harvard Business Review said: “Before a tool becomes available to the public, developers need to ask themselves if its capabilities are ethical. Does the new tool have a foundational ‘programmatic core’ that truly prohibits manipulation? How do we establish standards that require this, and how do we hold developers accountable for failing to uphold those standards?” The questions are intense and have far-reaching implications.

Adnan Zai: Now is the time to deal with the implications of these technological advances. We cannot afford to wait because hackers will always be one step ahead of us.

Kraven: We truly appreciate your time, Adnan Zai. And we will be vigilant about our use of ChatGPT moving forward.