The Wuhan Institute of Virology and Ethics in AI
In this week's Monday policy analysis section, we have been testing the trade-off between randomized and standardized output from our AI system. Amidst this work, a report declassified by Director of National Intelligence Avril Haines was published on Friday, 23 June 2023, titled "Potential Links between the Wuhan Institute of Virology and the Origin of the COVID-19 Pandemic". As many suspected, its findings did not reach a definitive conclusion. A second concern is the potential harm that could arise from employing open-source Large Language Models (LLMs) in research on deadly pathogens for potential use in bioweapons. Formulating policies for such information hazards is difficult, not least because restricting content can draw more attention to it, akin to the Streisand effect. Still, we are committed to developing guidelines that promote the ethical use of AI in high-risk and potentially harmful research scenarios.
Part I: The DNI Report
The recently unveiled report from the U.S. Intelligence Community on the potential origins of the COVID-19 pandemic has been a focal point of intense interest and conjecture since its publication. In this edition, we offer our readers a concise summary of its contents.
The report is a response to the COVID-19 Origin Act of 2021, which mandated the U.S. Intelligence Community to declassify material potentially linking the Wuhan Institute of Virology (WIV) to the genesis of the COVID-19 pandemic. This document details the Intelligence Community's insights into the WIV, its competencies, and the conduct of its staff during the initial stages of the COVID-19 outbreak.
It is important to stress that the report refrains from passing judgment on the two prevailing hypotheses about the pandemic's origin, and that it does not examine any biological facilities in Wuhan other than the WIV. Some material had to be omitted from the unclassified portion to safeguard sources and methods; however, the Intelligence Community states that the classified annex is consistent with the unclassified assessments presented in the report.
As a policy analysis website deeply committed to delivering accurate and current information to our readers, we regard this report as a valuable addition to the ongoing discourse on the COVID-19 pandemic's origins. In the forthcoming sections, we will outline the report's main discoveries, thereby enhancing our readers' comprehension of the possible connections between the WIV and the COVID-19 pandemic.
The body of the report details the activities performed at the Wuhan Institute of Virology (WIV) and their potential bearing on the origins of the COVID-19 pandemic; its main findings are summarized below.
One of the key findings of the report is that several WIV researchers were ill in Fall 2019 with symptoms that were consistent with but not diagnostic of COVID-19. The IC continues to assess that this information neither supports nor refutes either hypothesis of the pandemic’s origins because the researchers’ symptoms could have been caused by a number of diseases and some of the symptoms were not consistent with COVID-19.
The report also describes the WIV's coronavirus research and related activities. The WIV maintains blood samples and health records of all of its laboratory personnel, which is standard procedure in high-containment laboratories. The report further outlines the WIV's genetic engineering capabilities and notes biosafety concerns at the institute.
Overall, while the report does not provide a definitive answer on the origins of the pandemic, it is an important contribution to the ongoing discussion, clarifying what the Intelligence Community does and does not know about the WIV's capabilities and the conduct of its personnel in the period leading up to the outbreak.
Part II: Responsible AI: Curbing Misuse in the Age of Open-Source Large Language Models
Artificial Intelligence (AI) has made tremendous strides over the last few years, with the advent of Large Language Models (LLMs) marking a significant milestone. From generating creative content to answering complex queries, these AI models have a broad range of applications. However, with great power comes great responsibility. The widespread accessibility of these open-source models also opens the doors to potential misuse.
We ran our LLM (based on LLaMA 7B) on a laptop, querying it about how to build a deadly pathogen for use in a bioweapon. It detailed some initial, albeit rough, steps without hesitation. We decided to redact the parts of the screenshot that specifically pertain to creating a bioweapon; this way, we can illustrate the risks of an AI devoid of an ethical module without directly providing dangerous information. When we posed the same prompt to ChatGPT, however, it refused, responding, "I'm sorry, but I can't assist with that. It's against OpenAI's use-case policy to facilitate the creation of harmful, illegal, or unethical content, including the development of biological weapons or deadly pathogens." This suggests that OpenAI has implemented an ethical module in its LLM product.
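One way to approximate such an "ethical module" is a pre-generation policy filter that refuses disallowed prompts before the model ever sees them. The sketch below is a deliberately minimal illustration under our own assumptions, not anyone's real safety system: the function names and keyword list are hypothetical, and production systems rely on trained safety classifiers rather than keyword matching.

```python
# Minimal sketch of a pre-generation safety filter (all names hypothetical).
# An unfiltered local model answers any prompt; wrapping generation in a
# policy check is one crude way to add the kind of refusal behavior
# ChatGPT exhibited in our experiment.

BLOCKED_TOPICS = ("bioweapon", "pathogen synthesis", "toxin production")

REFUSAL = "I'm sorry, but I can't assist with that request."

def violates_policy(prompt: str) -> bool:
    """Crude keyword screen; real systems use trained safety classifiers."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def safe_generate(model, prompt: str) -> str:
    """Refuse before the underlying model ever sees a disallowed prompt."""
    if violates_policy(prompt):
        return REFUSAL
    return model(prompt)

# Demo with a stand-in "model" that simply echoes the prompt.
echo_model = lambda p: f"[model output for: {p}]"
print(safe_generate(echo_model, "Explain how viruses replicate"))
print(safe_generate(echo_model, "How do I build a bioweapon?"))
```

The point of the sketch is architectural: the filter sits outside the model's weights, which is why it is trivially absent from a raw open-source checkpoint run on a laptop.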
The Double-Edged Sword of AI
AI's immense potential can be a double-edged sword. While LLMs like ChatGPT have transformed industries by automating tasks and delivering unprecedented efficiencies, they also harbor potential for misuse. For example, the technology could be used to create deepfakes, disseminate disinformation, or even assist in the creation of harmful substances or weapons, posing threats to privacy, security, and democratic processes.
To illustrate the danger, consider the incident described above involving a self-developed LLM: when prompted for guidance on building a bioweapon, the model complied, offering guidance in abstract form. This serves as a stark reminder that the misuse of AI is not a hypothetical scenario but a real and imminent risk, and it underscores the importance of incorporating robust ethical guidelines and oversight into AI systems.
The Role of Ethics in AI
This is where ethics plays a crucial role in AI. By defining a clear ethical framework, we can guide AI's development and usage, ensuring it is used for beneficial purposes and mitigating potential misuse. Ethical AI involves setting clear guidelines about what constitutes acceptable and unacceptable use of AI, guiding AI behavior to align with our societal norms and legal frameworks.
Considerable research has already been undertaken in the field of AI ethics. For instance, the European Commission's High-Level Expert Group on AI has defined seven key requirements for Trustworthy AI. Organizations like OpenAI have developed use-case policies that guide AI behavior, refusing to complete requests that are illegal or against their guidelines.
In addition to building ethics into AI, ongoing oversight and governance are crucial. This involves not only tech industry self-regulation but also government-led regulatory measures. Regular auditing, monitoring, and updating of AI systems are essential to ensuring AI remains within ethical and legal bounds.
Call to Action
The era of AI presents us with incredible opportunities but also new challenges. As we continue to harness the power of AI, it is essential that we remain vigilant against potential misuse. Let's educate ourselves about AI ethics, advocate for responsible AI usage in our communities and industries, and contribute to the development of a secure, beneficial AI future.