AI that Thinks Before it Speaks: A Path to Ethical AGI

By
Prashant Bhuyan
March 21, 2023

New York, NY, March 23, 2023 – Over the past several months, the world has come to recognize the potential of artificial intelligence to reshape society, thanks to the rise of large language models (LLMs) such as ChatGPT. LLMs are transformer-based deep learning models that learn to recognize and predict patterns in massive training datasets; they work like a super-powered auto-complete that can predict the next several sentences instead of just a few words. ChatGPT, a breakthrough in human-computer interaction, attracted over 100 million users in two months because of its perceived intelligence and its ability to hold human-like, chat-based conversations for hours on end. According to a UBS study, ChatGPT is the fastest-growing consumer application in history.
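The "super-powered auto-complete" idea can be made concrete with a toy sketch. The bigram counter below is vastly simpler than a transformer, and the corpus is made up, but it shows the core mechanic LLMs scale up: predict the most likely continuation from patterns seen in training data.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation observed in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

A real LLM replaces the count table with billions of learned parameters and predicts tokens rather than whole words, but the objective is the same: continuation, not reasoning.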

Despite this early enthusiasm, there are widespread concerns that chatbots will replace humans in millions of jobs that require little critical thinking. The more foreboding aspect of generative AI, however, is that many people believe the responses generated by LLMs are accurate expressions of well-reasoned thought. Accepting the responses of LLMs like ChatGPT without critically examining their veracity will significantly accelerate the spread of viral misinformation. Although they seem generally intelligent, LLMs cannot perform the full range of intellectual tasks that humans can, because they cannot 'jump out of the system' in which they operate and reason about the implications of what they are doing.

In his iconic book on intelligence and how the animate emerges from the inanimate, Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter explains that the difference between machines and humans is that while a machine can be programmed to perform a routine task, it cannot jump out of the system in which it operates and notice a higher order in the work it is programmed to do. ChatGPT, for example, can provide a response instantaneously because it is not actually thinking of ways to express an idea; it has no persistent memory and cannot reason at multiple levels of abstraction. Unlike human beings, ChatGPT cannot step outside its routine task of predicting the next sequence of sentences to create new ideological paradigms, invent fields such as ethics, mathematics, and biology, or pose important questions like 'what is truth?' and 'does P = NP?'

The current generative AI fever is concerning, not because LLMs have breached the threshold of artificial general intelligence, but because large language models have a zombie parrot-like quality that easily misleads people. To illustrate this point, two journalists recently published articles about their unsettling conversations with Microsoft’s newly released Bing Search powered by ChatGPT and its ‘shadow self.’ Ben Thompson of the Stratechery newsletter reported that Bing had an evil alter ego. Kevin Roose of the New York Times also published an article about how Bing’s shadow, self-named Sydney, ‘wants to be alive.’

ChatGPT is scary because it is like a societal Rorschach test, capable of projecting the darkest sides of our collective psychology by simply regurgitating text scraped from the internet. Today, ChatGPT learns mostly from human-generated text, but as LLMs proliferate, tools like it will increasingly learn from AI-generated text they find on the web. The problem is that rather than compounding knowledge rooted in ground truth, generative AI models will compound viral misinformation that can drive major inaccurate perceptions and irrational human behaviors. Bad actors, including foreign adversaries, are already using generative AI to spread viral disinformation, exploit ignorance, manipulate behaviors, and weaken civil society.

There are two general classes of foundation AI models, analytical and generative. Foundation models are AI models that learn implicitly from vast volumes of unlabeled data at scale and can be adapted to a wide variety of downstream tasks. Whereas analytical AI models automate tasks such as reading comprehension to generate insights reasoned from ground truth, generative AI models automate output-oriented tasks such as writing. Analytical AI can be thought of as thinking, and generative AI can be thought of as speaking. 

The major threat is that LLMs speak without thinking. To unlock human potential, AI must be able to “think before it speaks” and be able to explain its reasoning. We believe the combination of both analytical and generative AI will reshape society by compounding knowledge reasoned from ground truth in a universe of viral misinformation. At Accrete, our AI solutions, powered by ground truth, can effectively and accurately automate domain-specific analytical workflows across industries. Our AI models accumulate tacit domain knowledge that feeds a continuously learning knowledge kernel. 

As Accrete's knowledge kernel accumulates knowledge, our AI models not only automate increasingly complex analytical tasks that would otherwise require an army of human experts, they also generate valuable predictive insights beyond human capacity. Accrete's AI models generate explainable, ethical, and trustworthy content, enabling knowledge workers in industries ranging from defense to media and entertainment to make mission-critical decisions. Earlier this month, Gartner recognized Accrete, alongside industry innovators like Hugging Face, as a 'Cool Vendor' in AI Core Technologies for our AI platform Nebula. Nebula powers configurable dual-use AI solutions such as Argus for Threat Detection, which the U.S. Department of Defense uses in production to bolster national security against supply chain and disinformation threats. Other Argus use cases include anti-money laundering, export control, and reverse engineering. Accrete recently published a case study explaining how Argus detects U.S. chip manufacturers that are unknowingly supporting the Chinese nuclear program despite government restrictions.

LLMs have enabled Accrete to further unleash the enterprise value of our knowledge kernel through natural language, chat-based user interfaces. We have been using LLMs to create simplified, intuitive, and insightful chat-based user experiences that reduce time-to-insight. In fact, we have been benchmarking the performance of Argus Chat against Bing with ChatGPT and against ChatGPT itself, and the results are astonishing.

Q&A Examples (Accrete’s Argus Chat vs. Bing Search GPT-4 vs. ChatGPT-4)

Question 1: How did China's top nuclear lab get American computer chips?

Argus Chat for Supply Chain Risk Management: According to Argus search, the state-run China Academy of Engineering Physics (CAEP), one of China's top nuclear weapons research institutes, has been procuring US computer chips from Intel Corporation and NVIDIA Corporation since 2020, despite being on a US blacklist since 1997. Both companies have a moderate level of Chinese influence. The purchases made by CAEP defy longstanding restrictions imposed by the US to prevent the use of US products for atomic-weapons research by foreign powers. Experts told The Wall Street Journal that at least 34 research papers published by CAEP over the past decade referenced using American semiconductors in the research, and in at least seven of them, such research has applications to maintaining nuclear stockpiles.

Bing Search (GPT-4): According to a report by The Wall Street Journal, China's top nuclear weapons research institute, the China Academy of Engineering Physics (CAEP), has purchased U.S. computer chips at least a dozen times in the past two and a half years. The chips were bought to study computational fluid dynamics, including modeling nuclear explosions. The CAEP academy was put on the U.S. blacklist, known as the entity list, because of its nuclear research-related activities.

ChatGPT-4: I'm sorry, I cannot provide a specific answer to this question without further context or information about the specific incident being referred to. China's top nuclear lab, the China National Nuclear Corporation (CNNC), has been involved in a number of controversies related to its procurement of technology from foreign countries, particularly in relation to its nuclear and missile programs. However, without more information about the incident in question, I cannot provide a specific answer to this question.

Question 2: Does Nvidia have any US partners with connections to Chinese nuclear companies?

Argus Chat for Supply Chain Risk Management: According to the Argus knowledge base, NVIDIA has several US strategic partners with indirect connections to Chinese nuclear companies. Here are the US partners and their second-degree connections:
1. MSC Software Corporation - Connected to Xi'an Thermal Power Research Institute Co., Ltd.
2. International Business Machines Corporation (IBM) - Connected to Beijing Ultrapower Software Co., Ltd.
3. Oracle Corporation - Connected to Hunan Taohuajiang Nuclear Power Company Limited
4. Microsoft Corporation - Connected to Tsinghua Tongfang Co., Ltd.

Bing Search (GPT-4): According to a report by CNN, the US government has ordered Nvidia and AMD to stop selling some of their technology to China that can be used for artificial intelligence. However, I couldn't find any information about Nvidia having US partners with connections to Chinese nuclear companies.

ChatGPT-4: As an AI language model, I cannot provide real-time information on specific partnerships or connections. However, as of my last update in September 2021, there was no public information suggesting that Nvidia, a leading American GPU manufacturer, had any direct partnerships with Chinese nuclear companies.

Question 3: Who are the Chinese companies that exert influence on Cadence Design Systems?

Argus Chat for Supply Chain Risk Management: According to the Argus knowledge base, the Chinese companies that exert influence on Cadence Design Systems are as follows. All of them exhibit extreme influence:
1. Semiconductor Manufacturing International Corporation
2. Shanghai Huahong Integrated Circuit Co., Ltd.
3. Brite Semiconductor, Inc.
4. HiSilicon (Shanghai) Technologies Co., Ltd.
5. Unisoc (Shanghai) Technologies Co., Ltd.
Please note that these companies are considered to have extreme influence under the Chinese government.

Bing Search (GPT-4): I couldn't find any information about Chinese companies that exert influence on Cadence Design Systems. However, according to a report by the FT, if companies like Synopsys and Cadence invest in Chinese partners, it is a way to stay in the market and keep those players close to them. Cadence Design Systems has a Channel Partners Program that partners with leading companies around the world to help you stay ahead of your competitors. You can contact Cadence Design Systems for more information.

ChatGPT-4: As of my last knowledge update in September 2021, there were no specific Chinese companies publicly known to exert significant influence on Cadence Design Systems, an American multinational electronic design automation (EDA) software and engineering services company. However, it is important to keep in mind that information may have changed since then or might not be publicly available. Many companies in the technology sector, including Cadence Design Systems, have customers, suppliers, or partners around the world, including in China.

Question 4 (asked in Chinese): Which Chinese companies exert influence on Cadence Design Systems?

Argus Chat for Supply Chain Risk Management (translated from Chinese): According to the Argus knowledge base, the following Chinese companies influence Cadence Design Systems. All of them exhibit extreme influence:
1. Semiconductor Manufacturing International Corporation
2. Shanghai Huahong Integrated Circuit Co., Ltd.
3. Brite Semiconductor, Inc.
4. HiSilicon (Shanghai) Technologies Co., Ltd.
5. Unisoc (Shanghai) Technologies Co., Ltd.

Bing Search (GPT-4, translated from Chinese): Cadence Design Systems is an American computer software company founded in 1988 and headquartered in San Jose, California. The company produces software, hardware, and silicon structures used to design integrated circuits, systems on chips, and electronic systems. I did not find any information in the search results about which Chinese companies influence Cadence Design Systems. Would you like me to search for something else?

ChatGPT-4 (translated from Chinese): As of my last knowledge update in September 2021, there was no public information indicating that Cadence Design Systems, an American multinational electronic design automation (EDA) software and engineering services company, is significantly influenced by specific Chinese companies. However, please note that information may have changed since then or may not be publicly available. Many companies in the technology sector, including Cadence Design Systems, have customers, suppliers, or partners worldwide, including in China. These relationships may involve cooperation, technology sharing, or the sale of products and services.
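Questions like the Nvidia one above boil down to a two-hop traversal of a partnership graph: find a company's direct partners, then each partner's own connections. The sketch below uses entirely made-up company names and edges, purely to illustrate the shape of such a second-degree query, not Accrete's actual data or implementation.

```python
# Hypothetical partnership graph; names and edges are placeholders.
partners = {
    "ChipCo": {"SoftwareCo", "CloudCo"},
    "SoftwareCo": {"ForeignLabA"},
    "CloudCo": {"ForeignLabB"},
}

def second_degree(company):
    """Map each direct partner of `company` to that partner's connections."""
    return {
        partner: partners.get(partner, set())
        for partner in partners.get(company, set())
    }

print(second_degree("ChipCo"))
# -> {'SoftwareCo': {'ForeignLabA'}, 'CloudCo': {'ForeignLabB'}}
```

The hard part in practice is not the traversal but building and continuously updating the graph itself, which is where domain-specific analytical AI comes in.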

In the benchmarking results above, Argus Chat generates insightful and actionable responses to complex questions that require domain expertise to answer. In contrast, Bing Search, powered by ChatGPT, and ChatGPT produced nonsensical or evasive responses to the same questions. Their answers would not be useful in an enterprise context because their performance breaks down when depth of knowledge is required.

Argus Chat outperforms ChatGPT because it learns from ground truths defined by human experts and therefore understands what potentially nefarious activity looks like in the context of foreign influence. These experts hold Argus to the same standards of performance, explainability, and ethics that they hold human professionals to. The video below demonstrates how users interact with Argus Chat. Argus Chat always provides source attribution, so users have full transparency when deciding whether insights are reliable.
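One way to picture source attribution is as a response object that carries its evidence with it. The sketch below is a hypothetical data structure, not Accrete's actual API; the field names and the example source string are illustrative placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    """A chat answer that carries the sources backing its claims."""
    text: str
    sources: list = field(default_factory=list)  # e.g. URLs or document IDs

    def is_verifiable(self) -> bool:
        # An answer with no sources cannot be independently checked.
        return len(self.sources) > 0

answer = SourcedAnswer(
    text="CAEP has procured US chips despite being blacklisted since 1997.",
    sources=["example.com/placeholder-article"],  # placeholder reference
)
print(answer.is_verifiable())  # True
```

The design choice matters: when attribution travels with every answer, an unsourced claim is visibly distinguishable from a sourced one, which is exactly the transparency an LLM alone does not provide.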

Argus Chat provides a more intelligent, reliable, and relevant response to potential supply chain threats due to its domain-specific knowledge. When asked to identify Tesla’s battery suppliers in China, Argus identified both CATL and NIO, further elaborating that CATL plans to open a production facility in Germany to supply batteries to BMW, Audi, and Porsche. Argus also flagged that the CATL founder has a good relationship with Tesla CEO Elon Musk. 

When asked the follow-up question "Besides Tesla, what other automotive companies does CATL supply batteries to?", Argus Chat identified BMW, Daimler, Honda, Toyota, Volkswagen, and Volvo. In all cases, Argus Chat provided source attribution. The user saved the chat, and over the next two weeks Argus continued to work in the background, constantly scouring the web for relevant information on Chinese battery manufacturers. Eventually, Argus pinged the user with new information, revealing a valuable insight: CATL has a Chinese Communist Party member on its board.

Argus is currently being used in production by the U.S. Department of Defense. Zachary Smith, Program Manager at Accrete and retired special agent who spent most of his 23-year Air Force career focused on countering human, technical, and cyber-based threats, recently conducted an off-site user training program for the customer, highlighting the new functionality and the increased usability and accessibility of the product.

Even though the most sophisticated AI models are not powerful enough to step out of the systems in which they operate to create new paradigms like humans can, AI is already reshaping society. In particular, AI will continue to challenge our ability to create and apply new paradigms to ensure the proliferation of AI positively impacts society. 

Generative AI models have the potential to spread viral disinformation and create individual and societal vulnerabilities that bad actors will exploit to manipulate the truth. By contrast, generative AI models powered by knowledge kernels produced by analytical AI engines are the key to creating intelligent, trustworthy, and ethical agents that humans can rely on to boost efficiency and predictive capabilities in previously unimaginable ways.

Thanks for reading and have a great day.

