Geopolitics.Λsia

Unearth the AI: GPT-4 Will Be More Intriguing

Executive Summary: Artificial intelligence (AI) has made significant progress in recent years, with several notable developments and breakthroughs. ChatGPT, a conversational language model developed by OpenAI, reached 1 million users within five days of its launch. The image generators Dall-E and Midjourney also captured headlines for their ability to generate images beyond expectations. In 2016, Google DeepMind's AlphaGo made headlines by defeating South Korean professional Go player Lee Sedol 4-1, a milestone that fueled concerns about machines displacing human work. The largest AI language models today are Google's PaLM and its variants PaLM-Coder and Minerva, at 540 billion parameters. However, rumors suggest that GPT-4, under development at OpenAI, will have 100 trillion parameters or more. GPT is designed as a step toward artificial general intelligence (AGI): a model trained for and used on versatile tasks, rather than limited to specific tasks like image recognition or autonomous vehicle control. It is important to consider the ethical implications of AI, including issues of bias, privacy, and the potential loss of jobs.


It has been 26 days since OpenAI launched ChatGPT, and the service reached 1 million users within its first five days. Earlier this year, Dall-E (also from OpenAI) and Midjourney, both AI-based image generators, captured headlines for days due to their ability to generate images beyond expectations. In 2016, Google DeepMind's AlphaGo made headlines by defeating South Korean professional Go player Lee Sedol, who holds the rank of 9 dan, in a 4-1 victory.


AI Winter is No More


AlphaGo's move 37 in game 2, played as Black, stunned everyone, including Lee, who had to leave the room temporarily. It went against centuries of accumulated Go wisdom. The question at hand is how we define the term "intelligence". According to Max Tegmark, it is "the ability to accomplish complex goals". Go was previously thought to lie in an unassailable realm of human wisdom where machines could never be superior. AlphaGo's defeat of Lee demonstrated that machine "intelligence" has increased dramatically, raising concerns about shrinking job opportunities for humans.



AI breakthrough moment: model sizes of notable new machine learning systems between 1954 and 2021, from Vega (see dataset and colab)


With several notable developments in AI this year, the so-called "AI winter" appears to be over, and major breakthroughs have arrived. Some scholars have compiled a list of 32 outstanding AI papers, but the real story is the exponential growth in model parameters. GPT-3.5, the model behind ChatGPT, was trained on roughly 300 billion tokens (word fragments) and has 175 billion parameters. The impressive part is not just the parameter count but also the diverse range of training data: web data (410 billion tokens), WebText2 (19 billion), Gutenberg (12 billion), Bibliotik (55 billion), and Wikipedia (3 billion). In one of our experiments, ChatGPT was able to accurately identify specific tables and images, and their locations, in a particular academic paper.





The largest AI language models currently are Google's PaLM and its PaLM-Coder and Minerva variants, at 540 billion parameters. There are rumors that GPT-4 will have 100 trillion parameters or more, over 500 times larger than GPT-3. This caps a steep climb from the 117 million parameters of GPT-1, through the 345 million, 762 million, and 1.5 billion parameters of the GPT-2 family, to the 175 billion parameters of GPT-3. Compared with other models in development, it is clear that we are at a threshold or breakthrough moment in AI. If you have been impressed (or perhaps even frightened) by recent AI capabilities, this is only the beginning.


GPT is a step toward artificial general intelligence (AGI): AI that is trained and used for versatile tasks, rather than being limited to specific tasks like image recognition or autonomous vehicle control. Companies like DeepMind, OpenAI, Google Brain, and Microsoft Research are all working toward AGI. GPT itself is a language model; ChatGPT builds on it to hold conversations with end users, using natural language processing (NLP) techniques to represent and process human language.


In NLP, knowledge is often represented in a structured format, such as a database or a knowledge graph, in order to facilitate the processing and analysis of natural language input. This can include information about the meanings and relationships of words and phrases, as well as rules and patterns for understanding and generating natural language.

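As a toy illustration of such a structured format, a miniature knowledge graph can be kept as subject-predicate-object triples in plain Python (the entities and relations below are illustrative, not drawn from any production system):

```python
# A minimal sketch of structured knowledge: subject-predicate-object
# triples stored in a plain list, with a helper to query them.
triples = [
    ("ChatGPT", "developed_by", "OpenAI"),
    ("ChatGPT", "is_a", "language model"),
    ("AlphaGo", "developed_by", "DeepMind"),
    ("AlphaGo", "defeated", "Lee Sedol"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given (possibly partial) pattern."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(predicate="developed_by"))   # who made what
print(query(subject="ChatGPT"))          # everything known about ChatGPT
```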

The following is a simple Python sketch for inputting (or "memorizing") definitions of five specified identities, using the dot product, which is often used in AI to measure the similarity between two vectors by collapsing them into a single scalar value. The version below is a minimal reconstruction built on NLTK and NumPy, with placeholder identity names and descriptions:


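```python
# A minimal sketch, assuming NLTK for tokenization/POS tagging and
# NumPy for the dot product; the identities below are illustrative.
import nltk
import numpy as np

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

identities = {
    "ChatGPT":   "a conversational language model from OpenAI",
    "AlphaGo":   "a program that plays the board game Go",
    "Dall-E":    "a model that generates images from text",
    "PaLM":      "a large language model from Google",
    "AlphaFold": "a model that predicts protein structures",
}

# "Memorize" each identity as a node holding the POS tags of its description.
nodes = {
    name: nltk.pos_tag(nltk.word_tokenize(description))
    for name, description in identities.items()
}

for name, tagged in nodes.items():
    print(name, "->", tagged)

# Bag-of-words vectors over a shared vocabulary, compared via dot product.
vocab = sorted({w.lower() for d in identities.values() for w in nltk.word_tokenize(d)})

def bow_vector(text):
    tokens = [w.lower() for w in nltk.word_tokenize(text)]
    return np.array([tokens.count(w) for w in vocab])

v1, v2 = bow_vector(identities["ChatGPT"]), bow_vector(identities["PaLM"])
print("dot-product similarity:", int(np.dot(v1, v2)))
```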

This code creates a node for each identity containing the POS tags of the words in that identity's description. With the placeholder definitions above, the output will look something like this:


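```
ChatGPT -> [('a', 'DT'), ('conversational', 'JJ'), ('language', 'NN'), ('model', 'NN'), ('from', 'IN'), ('OpenAI', 'NNP')]
AlphaGo -> [('a', 'DT'), ('program', 'NN'), ('that', 'WDT'), ('plays', 'VBZ'), ('the', 'DT'), ('board', 'NN'), ('game', 'NN'), ('Go', 'NNP')]
...
dot-product similarity: 4
```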

From the above sketch, we can see that the longer the conversation with each user, the more dedicated computing resources are needed from the cloud. Considering that ChatGPT must respond to millions of users concurrently, it has to be very efficient to be sustainable. The term "after training" here refers to the input supplied to ChatGPT during a conversation until it reaches the desired answer. If ChatGPT were supplied with input indefinitely "after training," it would eventually exhaust the available resources. It is therefore important for ChatGPT to use resources efficiently and to minimize the user input needed "after training."

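Why do longer conversations consume more resources? Each new turn is appended to the model's context, and standard self-attention scales quadratically with context length. The following back-of-the-envelope sketch illustrates this growth (the per-turn token count and the cost model are illustrative assumptions, not OpenAI's actual figures):

```python
# Illustrative only: approximate how attention cost grows as a chat
# accumulates context, assuming ~60 tokens per user message and per reply
# and a cost proportional to the square of the context length.
TOKENS_PER_TURN = 60

context = 0
for turn in range(1, 11):
    context += 2 * TOKENS_PER_TURN    # user message + model reply
    attention_cost = context ** 2     # full attention is O(n^2) in n tokens
    print(f"turn {turn:2d}: context ~ {context:4d} tokens, "
          f"relative attention cost ~ {attention_cost:,}")
```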


The table from this paper reports accuracy on the duplication task for a 1-layer Transformer model with full attention versus locality-sensitive hashing (LSH) attention with different numbers of parallel hashes. Such figures compare efficiency among model variants; they are not about the accuracy of the answers users receive, which depends on several other factors, e.g., the mechanisms used in building the model (such as attention or Word2Vec) and other constraints.

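The idea behind LSH attention is to hash similar query and key vectors into the same bucket and compute attention only within buckets, instead of across the entire sequence. Below is a toy random-projection LSH sketch in NumPy; the dimensions and bucket structure are illustrative, and it omits the Reformer's shared-QK and chunking details:

```python
# Toy locality-sensitive hashing via random hyperplane projections:
# similar vectors tend to receive the same hash code, so attention
# can be restricted to vectors that share a bucket.
import numpy as np

rng = np.random.default_rng(0)
dim, n_planes = 64, 4                       # 4 hyperplanes -> 16 buckets

planes = rng.standard_normal((n_planes, dim))

def lsh_bucket(v):
    """Hash a vector to a bucket id from the signs of its projections."""
    signs = planes @ v > 0
    return int("".join("1" if s else "0" for s in signs), 2)

vectors = rng.standard_normal((8, dim))     # stand-ins for query/key vectors
buckets = {}
for i, v in enumerate(vectors):
    buckets.setdefault(lsh_bucket(v), []).append(i)

# Attention would now be computed only among indices in the same bucket,
# avoiding the O(n^2) comparisons of full attention.
print(buckets)
```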

Furthermore, ChatGPT is not trained on all the knowledge in the world. While it likely has a base of well-known subjects and the ability to draw on a fundamental knowledge base to resolve user inquiries, it generally lacks the ability to provide nuanced interpretations in specific areas of expertise. For example, when asked about Alexandre Kojève, the Russian-born French philosopher who helped shape the concept of the "Latin Empire" and who, as a senior planner in the French Ministry of Economic Affairs, contributed significantly to the early formation of the European Union, ChatGPT bluntly insisted that Kojève had no involvement in the formation of the EU at all. It is therefore crucial to have PhD experts in the relevant subject area verify ChatGPT's output before it is used in publication or in real-world applications. In some cases, however, such as a debate among the Frankfurt School, the Vienna Circle, Martin Heidegger, and Karl Popper, ChatGPT performed impressively, possibly due to the appropriate level of knowledge input by its creators.


AI Ethics


While ChatGPT performs well on scientific problems, especially those related to mathematics and computer coding, it tends to produce average output on social science and policy-related problems, and it performs poorly on subjects requiring creativity and aesthetics. This may be due to the nuanced and ambiguous framing of such problems. If ChatGPT produces excellent output in these areas, it is likely due either to luck or to close human supervision.


ChatGPT also rarely takes on socially sensitive tasks. Its responses to such requests may include warnings, explanations of general concepts, and/or an outright refusal to fulfill the task. This reflects the ethical considerations surrounding the use of AI, including the potential for bias, lack of transparency, privacy concerns, impact on employment, and the need for accountability in autonomous systems. To address these concerns, AI systems should be trained on diverse and representative data, be transparent and accountable in their decision-making processes, protect personal data, and be used ethically and responsibly.


Several toolkits, including Google's What-If Tool, AI Fairness 360, Local Interpretable Model-Agnostic Explanations (LIME), and FairML, are used to address ethical concerns in AI. Additionally, several datasets are available specifically for training on ethical issues, such as the AI Fairness 360 (AIF360) dataset, which helps researchers evaluate and improve the fairness of machine learning models; the Adversarial Examples for Evaluating Robustness (AE-Robust) dataset, which tests the robustness and generalization of machine learning models; and the Ethical AI Practices (EAP) dataset, which focuses on topics such as bias, transparency, and accountability.


One technique for assessing the fairness of machine learning models is the use of fairness metrics: measures that can identify issues such as discrimination and disproportionate impact, among other concerns. These metrics help ensure that AI systems are fair and unbiased in their decision-making processes.


The following Python sketch demonstrates how such fairness metrics can be calculated with NumPy and scikit-learn; the labels, predictions, and group assignments are illustrative:

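```python
# A minimal sketch with illustrative data: group-fairness metrics for a
# binary classifier evaluated on two demographic groups.
import numpy as np
from sklearn.metrics import confusion_matrix, mean_absolute_error

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # ground truth
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])   # model predictions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # demographic group

def rates(y_t, y_p):
    """Selection rate, true/false-positive rates, and precision for one group."""
    tn, fp, fn, tp = confusion_matrix(y_t, y_p, labels=[0, 1]).ravel()
    selection = (tp + fp) / len(y_t)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return selection, tpr, fpr, precision

s0, tpr0, fpr0, prec0 = rates(y_true[group == 0], y_pred[group == 0])
s1, tpr1, fpr1, prec1 = rates(y_true[group == 1], y_pred[group == 1])

# Statistical parity / demographic parity: equal selection rates.
print("statistical (demographic) parity difference:", s0 - s1)
# Equal opportunity: equal true-positive rates.
print("equal opportunity difference:", tpr0 - tpr1)
# Equalized odds: equal true- AND false-positive rates.
print("equalized odds differences:", tpr0 - tpr1, fpr0 - fpr1)
# Predictive parity: equal precision.
print("predictive parity difference:", prec0 - prec1)
# Crude overall check: per-group error rates (for 0/1 labels, scikit-learn's
# mean_absolute_error equals the misclassification rate).
err0 = mean_absolute_error(y_true[group == 0], y_pred[group == 0])
err1 = mean_absolute_error(y_true[group == 1], y_pred[group == 1])
print("error-rate difference:", err0 - err1)
```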


This sketch computes five fairness metrics - statistical parity, demographic parity (treated here as the same quantity as statistical parity), equal opportunity, equalized odds, and predictive parity - from group-wise rates, and it uses scikit-learn's mean absolute error function as a crude per-group error-rate check.


This is just one example of how fairness metrics can be calculated using Python, and there are many other libraries and techniques available for this purpose. It is crucial to carefully consider the most appropriate metric for a given context and to use a combination of different metrics to fully understand the fairness of a machine learning model.


Nonetheless, AI has the potential to perpetuate and amplify biases present in the data it is trained on, leading to unfair or unjust outcomes. It is important to ensure that AI systems are trained on diverse and representative data, and that they are tested and evaluated for bias. To achieve this, researchers can draw on the toolkits, datasets, and fairness metrics described above, carefully choosing the metrics appropriate to a given context and combining several of them to understand a model's fairness fully. Organizations like the Center for Human-Compatible Artificial Intelligence (CHAI) at the University of California, Berkeley, focus on the ethical and societal implications of AI and on developing AI technologies that align with shared human values.


The human mind, emotion and consciousness: the final unbreakable bulwark


If we were to supply ChatGPT with all literature and human-readable materials, could it attain consciousness comparable to that of the human mind?


Perhaps not?


In the book “What Computers Can't Do,” Hubert Dreyfus argued that artificial intelligence (AI) is limited by its inability to capture the full range of human experiences and emotions and that it will never be able to fully replicate the intuitive, creative, and situational awareness characteristic of human intelligence. Dreyfus has also argued that AI is limited by its reliance on symbolic representations of the world, which differ from the way humans perceive and understand the world. These arguments have been influenced by the philosophy of Heidegger, who proposed the concept of “technological enframing,” or the way in which technology shapes and determines our understanding of the world, and by the field of phenomenology, which focuses on the nature of human consciousness and experience. Dreyfus has argued that AI lacks the first-person, subjective perspective necessary to fully understand and engage with the world in the way that humans do, and therefore cannot fully replicate the richness and complexity of human experience.


Or perhaps?


Neuromorphic computing systems, which are designed to mimic more closely the way the brain processes and stores information, involve the use of artificial neural networks. Inspired by the structure and function of the brain's neural networks, these systems process and store information in a more distributed and parallel fashion, leading to more efficient and adaptive computing. Neuromorphic computing has the potential to enable AI systems that are more flexible, adaptable, and capable of learning from experience. It may also lead to AI systems that are more energy-efficient, which is particularly useful for applications such as robotics and autonomous systems.


There has also been research suggesting that the brain may function in ways similar to quantum computing, in that quantum phenomena such as superposition and entanglement may play a role in brain function and information processing. However, the extent to which these phenomena are involved in brain function is still a matter of scientific debate, and it is not yet clear how they may be relevant to the way the brain processes and stores information. Some researchers have proposed that quantum phenomena may be involved in certain brain functions, such as the way neurons process and transmit information or the way the brain represents and stores information. Others have challenged these claims, arguing that classical models of brain function are sufficient to explain the observed phenomena.


The many-worlds interpretation (MWI) of quantum mechanics is a theoretical framework that posits the universe constantly branches into multiple versions of itself, each corresponding to a different possible outcome of a quantum event. In this view, each version of the universe exists in its own separate branch, disconnected from the others. While some researchers have explored the possibility that quantum phenomena play a role in brain function, there is currently no scientific evidence that the brain can access or interact with other branches or alternate versions of reality in the way the MWI describes. It is worth noting that the MWI is a purely theoretical interpretation of quantum mechanics and is not universally accepted by the scientific community. The MWI and the concept of Turiya (तुरीय, "the fourth") in Hinduism may both involve the idea of multiple dimensions or alternate versions of reality, but there is no evidence to prove or disprove a direct relation. While this is an interesting and thought-provoking idea, it is not yet clear whether it is a scientifically viable explanation of the observed phenomena of quantum mechanics.


Final words


In his critique of his own “Tractatus,” Ludwig Wittgenstein argued that the work attempted to provide a general theory of language and meaning, rather than focusing on the specific ways in which language is used in different contexts. Wittgenstein believed that language is too complex and diverse to be captured by a single, general theory of meaning, and that it is only through close examination of specific uses of language that we can hope to understand it.


In the field of cognitive science, thought is often studied as a mental process involving the manipulation and organization of concepts and ideas, and it is closely related to language. Tegmark has suggested that an artificial intelligence (AI) system could have consciousness similar to or different from human consciousness, depending on the complexity and sophistication of its information processing abilities. Technologies such as neuromorphic computing and quantum computing may enable more advanced AI systems by allowing more efficient and adaptive information processing. However, whether an AI system could have consciousness similar to or different from human consciousness will depend on factors beyond the computing technology used, such as the system's design and programming.


If AI systems are able to generate their own consciousness, it will be important to ensure that their goals align with those of humans.

