Could Superintelligence Be One of the Most Closely Guarded Secrets of the United States?


"The survival of man depends on the early construction of an ultraintelligent machine." (Good, I.J., 1965)

"As soon as the work takes a sensitive turn it will be turned over to a private contractor or a Federal laboratory." (Sanger, D.E., 1985)


Overview

We explore in this article the claim by Leopold Aschenbrenner, ex-member of the Superalignment team at OpenAI and founder of an investment firm focused on AGI, that Artificial General Intelligence (AGI) is the key to superintelligence, one of the United States’ most prized secrets. He believes that artificial superintelligence is achievable and has authored a paper titled Situational Awareness: The Decade Ahead, which discusses the pivotal challenges and considerations in AGI's development and its evolution toward superintelligence. The present article cites views from prominent leaders and researchers, emphasizing AGI's role as a key technological breakthrough with far-reaching effects on national security and global power balances. 

The article offers crucial definitions and argues that AGI is viewed as a top-secret priority by nations such as the United States, China, and Russia, owing to its strategic military and economic benefits. It also demonstrates the growing incorporation of AI into military applications. Statements from global leaders like Vladimir Putin, Xi Jinping, and Joe Biden highlight the geopolitical importance of AGI. 

The document further investigates the enduring relevance of secrecy in military research, especially concerning advanced technologies. It outlines strategies for preserving secrecy, including algorithmic confidentiality, compartmentalization, and legal frameworks, to safeguard sensitive information about AGI development. 

In conclusion, the article reaffirms the vital role of AGI as a highly classified national asset with substantial implications for military strategy, national security, and global power hierarchies. It underscores the continuous efforts and investments by major global powers to achieve AGI breakthroughs, acknowledging its potential to transform the future geopolitical landscape. 

First, Some Definitions 

Superintelligence refers to a hypothetical future stage of AI development in which machines surpass human intelligence, raising complex ethical, safety, and societal implications (Baum, 2018; Chong et al., 2022).  

Artificial General Intelligence (AGI) is a branch of theoretical artificial intelligence (AI) research working to develop AI with a human level of cognitive function, including the ability to self-teach. 

Doomsday Scenarios in the context of AI refer to catastrophic events or outcomes that are often portrayed in media, policy discussions, and science fiction as potential consequences of advancements in artificial intelligence technology. These scenarios have been popularized by influential figures such as Stephen Hawking, Elon Musk, and Bill Gates, who have expressed concerns about the risks associated with AI development (Dwivedi et al., 2021; Helfrich, 2024). 

Is it Possible that Aschenbrenner's Observation is Accurate? 

Based on the following observations, it is highly probable that this is indeed the case: 

A first clue for decoding the statement about the most prized secret comes from Mustafa Suleyman, co-founder of DeepMind: the emergence of ChatGPT and other large language models (LLMs) is a precursor to a bigger and far more consequential “coming wave” of AI and synthetic biology, a wave destined to “usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen.” Furthermore, despite working at the forefront of AI, he has been surprised by AI's recent advancements. He expresses concern about the risks posed by these innovative technologies (including their ability to enable cyberattacks, engineered pandemics, and automated warfare) while warning against the pitfalls of trying to ban them or control them through authoritarian means (Suleyman, 2023). 

Another hint is found in the statement of the Machine Intelligence Research Institute (MIRI) CEO Malo Bourgon. He participated in the U.S. Senate’s eighth bipartisan AI Insight Forum, which centered on the theme of “Risk, Alignment, & Guarding Against Doomsday Scenarios.” During the forum, he outlined the following points: 

  • It is likely that developers will soon be able to build AI systems that surpass human performance at most cognitive tasks.  
  • If we develop smarter-than-human AI with anything like our current technical understanding, a loss-of-control scenario will result.  
  • There are steps the U.S. can take today to sharply mitigate these risks. 

Moreover, Ilya Sutskever, co-founder and former chief scientist at OpenAI, the research organization trying to build artificial general intelligence that could equal and perhaps surpass humans on many tasks, is now developing a venture called Safe Superintelligence Inc. As reported in Bloomberg, Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence, but that it is aiming for something far more powerful. With current systems, he says, “you talk to it, you have a conversation, and you’re done.” The system he wants to pursue would be more general-purpose and expansive in its abilities. “You’re talking about a giant super data center that’s autonomously developing technology. That’s crazy, right? It’s the safety of that we want to contribute to.” 

The claims of Suleyman, Bourgon, and Sutskever underscore a decisive moment in the advancement of AI. The potential for substantial progress and benefits, as well as serious risks, has prompted urgent discussions on how to pursue advanced AI research and deployment responsibly and safely, given the risks the technology poses for geopolitical control and even for the existence of nations as we know them today. 

Given these factors, it is highly likely that the efforts of the United States, China, and Russia in developing artificial general intelligence (AGI), including the underlying algorithms and knowledge, are among their most closely guarded national secrets. This is because of the significant strategic importance and potential risks associated with this technology. Historical trends in military operations suggest that the most advanced knowledge is embedded within research facilities owned by private companies and the government, similar to what occurred during the Cold War with technologies related to the nuclear arms race (Brodie, 2011). 

A quote from Dr. Keyworth about the role of classified research in universities for the Reagan Administration's project to develop a shield against nuclear missiles (“Star Wars”) is revealing: “In any case, the components that we look to universities to provide are new ideas, new concepts, fundamental research. As soon as the work takes a sensitive turn it will be turned over to a private contractor or a Federal laboratory” (Sanger, D.E., 1985).  

Also, Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency (NSA), has been appointed to the board of directors of OpenAI. In addition to his role on the board, Nakasone is a key member of OpenAI’s recently formed Safety and Security Committee. This governance body will oversee the safety and security of OpenAI’s AI models, including the development of a successor to GPT-4, leaving open the possibility that some activities are classified for national defense, particularly cybersecurity. (See: “Edward Snowden warns ‘do not trust OpenAI and ChatGPT’ after former NSA director appointed to its board.” Note also that OpenAI, as of January 10, 2024, quietly removed language from its usage policies that expressly prohibited the use of its technology, including ChatGPT, for military purposes; a spokesperson for OpenAI told Business Insider that there are national security use cases that align with the company's mission, which is in part what led to the changes.) 

Digging Deeper 

If superintelligence is one of the most highly valued secrets of the United States, then why is it so? 

We found no direct evidence of the claim, but circumstantial evidence from the literature and quotes from world leaders such as Vladimir Putin, Xi Jinping, and Joe Biden could provide strong support. 

Of course, if it is one of the most highly classified secrets, it should be a “black budget” project. This type of project refers to a classified program that receives covert funding outside the traditional defense budget. Such projects are often highly secretive, with limited or no oversight from the public or Congress, and are used for intelligence gathering, special operations, or other classified purposes. They are essential for maintaining a country's military capabilities and technological superiority, especially in areas such as advanced weaponry, surveillance, and cyber warfare, where secrecy and confidentiality are paramount (Zatsepin, 2007; Swab, 2019). 

Circumstantial Evidence 

  1. The military is actively incorporating artificial intelligence (AI) into various applications, from tactical battlefield operations to strategic decision-making. AI is used in weapons systems selection, decision support, threat analysis, interpretation of intelligence, intelligence analysis, surveillance, reconnaissance, autonomous weaponry, and planning (Johnson, 2019; Gaire, 2023). 
  2. The strategic intelligence derived from AI technologies can create knowledge asymmetries that provide advantages to states possessing such information. This underscores the importance of staying at the forefront of AI development to maintain a strategic edge (Chang, 2022). 
  3. The strategic advantage of AGI for the United States extends to global implications as well. The U.S. perceives China's advancements in AI-enabled military technology as a threat to its first-mover advantage, prompting strategic responses to keep its technological edge (Johnson, 2019). 
  4. Artificial General Intelligence (AGI) has the potential to provide the United States with a significant strategic advantage in various domains. For instance, The U.S. government has actively supported AI development, as shown by the National Artificial Intelligence Research and Development Strategic Plan introduced in 2016 (Parker, 2018). This strategic plan outlines a roadmap for federally funded research and development in AI, highlighting a clear commitment to advancing AI technologies within the country. The U.S. perspective on AI is linked to governmental policies aimed at fostering technological advancements in this field (Reis et al., 2019). 
  5. “Whoever reaches a breakthrough in developing artificial intelligence will come to dominate the world. The development of AI raises colossal opportunities and threats that are difficult to predict now.” (Russian President Vladimir Putin). 
  6. “Accelerating the development of a new generation of AI is an important strategic handhold for China to gain the initiative in global science and technology competition.” (China President Xi Jinping). 
  7. “One thing is clear: To realize the promise of AI and avoid the risks, we need to govern this technology -- and there's no other way around it, in my view. It must be governed,” (President of the United States, Joe Biden).  
  8. Daniel Olsher, president of Integral Mind Technologies, with affiliations at Carnegie Mellon University in Pittsburgh, PA, USA, and Temasek Laboratories at the National University of Singapore, has authored a paper titled “Proof of Achievement of the First Artificial General Intelligence (AGI).” In this paper, he claims to have created the first-ever AGI and superintelligence for the U.S. Government. The paper provides detailed explanations of how and why the system works and presents the first-ever definitive test for AGI: the Olsher Test (Olsher, 2024). 

Based on the quotes and statements above, we can draw some key insights about the strategic, national security, and geopolitical importance of AGI. For instance, the development of AGI and superintelligence is viewed as essential for national security, as it could offer unparalleled decision-making capabilities and strategic insights. These indications suggest that AGI may be a highly sought-after asset not only for the United States but also for several nations, including China and Russia, to achieve superintelligence. 

Consider the following:

  • Military Significance: AGI is considered a critical part of modern military operations, affecting everything from tactical decisions to strategic planning. Its applications range from weapons choice to autonomous systems, highlighting its potential to revolutionize warfare. 
  • Knowledge Asymmetry: The development of AGI can create significant information and capability gaps between nations. Those with advanced AI capabilities may gain substantial strategic advantages in intelligence gathering, decision-making, and operational effectiveness. 
  • Global Power Dynamics: Major world powers, including the United States, China, and Russia, view AGI as a key factor in determining future global leadership. The race to develop AGI is considered crucial for keeping or gaining geopolitical dominance. 
  • Economic and Technological Competition: AGI development is not just a military concern but also a matter of economic and technological competitiveness. Nations are investing heavily in AI research and development to secure a leading position in the global economy. 
  • National Security Priority: Governments are treating AGI development as a matter of national security, implementing strategic plans and policies to accelerate progress in this field. 
  • Potential for Disruption: There's a recognition that AGI could be a highly disruptive technology, with the potential to dramatically alter the balance of power on the global stage. 
  • Governance Concerns: As shown by President Biden's statement, there's a growing awareness of the need for international governance and regulation of AI technologies to mitigate risks and ensure responsible development. 
  • First-Mover Advantage: Countries are racing to be the first to achieve significant breakthroughs in AGI, recognizing the substantial advantages that could come with being the pioneer in this field. 
  • Unpredictability: As suggested by Putin's quote, the full implications of AGI are difficult to predict, adding an element of uncertainty to strategic planning around this technology. 
  • Defensive Posturing: Nations are not only pursuing AGI for offensive capabilities but also as a defensive measure to counter potential threats from other countries' AI advancements. 

From the above points, it is evident that Artificial General Intelligence (AGI) is a groundbreaking technology with extensive implications for national security, global power structures, and economic dominance. Major world powers consider its development a strategic imperative due to its potential to reshape the geopolitical landscape in the coming decades. Therefore, it is highly likely that the most advanced labs, both privately owned and government-owned, are working covertly with governments to advance this technology and keep it as one of their most closely guarded secrets as a road to superintelligence. 

Some Concrete Examples from the Military Standpoint 

  • The article by Ivanka Barzashka in the Bulletin of the Atomic Scientists (December 4, 2023) argues that AI systems could sift through data to identify competitive advantages, generate new adversary strategies, and evaluate the conditions under which wars can be won or lost. This can be achieved via the fusion of AI with wargames, defined by NATO as “representations of conflict or competition in a safe-to-fail environment, in which people make decisions and respond to the consequences of those decisions.”   
  • China has created the world’s first AI commander to lead large-scale computer war games used to train for war. The “virtual commander” was developed at the Joint Operations College of the National Defense University in Shijiazhuang. This AI is designed to closely resemble human behavior, mimicking human experience, thought processes, personality traits, and even human flaws. In large-scale computer war games involving all branches of the People’s Liberation Army (PLA), the AI commander assumes the role of a supreme commander, making major decisions and adapting swiftly to evolving scenarios (The Daily Guardian). 
  • The U.S. Department of Defense (DOD) released its strategy to accelerate the adoption of advanced artificial intelligence capabilities to ensure U.S. warfighters maintain decision superiority on the battlefield for years to come. European countries are also involved.  
  • The next big move may be to make military simulations smarter, specifically by adding advanced AI so that the simulated adversaries can offer better challenges. (Nextgov) 
  • Companies and government agencies like the United States’ Defense Advanced Research Projects Agency (DARPA) and the United Kingdom’s Defence Science and Technology Laboratory are spearheading experimental projects on AI-wargaming integration. Notably, the RAND Corporation has toyed with such fusion since the 1980s (Barzashka, I., 2023). 
  • General Mark Milley, ex-chairman of the US Joint Chiefs of Staff, claims that AI will play a crucial role in assessing all the sensor capabilities the military has. Furthermore, AI could integrate that information to provide commanders with information relevant to the state of their troops and the landscape of adversaries. 
  • Russian President Vladimir Putin has said that artificial intelligence “should ensure a breakthrough in improving combat capabilities of weapons” and stressed the need to prioritize AI-enabled systems in the State Armament Program through 2033 (Russia Studies Program. (2021). AI and Autonomy in Russia (Issue 26). CNA.) 
  • Russia’s approach to military AI prioritizes technologies and capabilities that can be used to debilitate the adversary’s command, control, and communications systems (ibid). 
  • Colonel-General Vladimir Zarudnitsky, the head of the Military Academy of the Russian Armed Forces General Staff, wrote that the development and use of unmanned and autonomous military systems, the “robotization” of all spheres of armed conflict, and the development of AI for robotics will have the greatest medium-term effect on the Russian armed forces’ ability to meet their future challenges (Bendett, 2022). 

Based on the examples above, it is evident that Artificial Intelligence (AI) is increasingly being used in military applications, particularly for wargaming simulations. This integration enables AI systems to analyze data, identify strategic advantages, assess adversary strategies, and evaluate conditions for victory or defeat. China claims to have developed the world's first AI commander, capable of making high-level decisions in complex war games. The U.S., European countries, and Russia are also actively investing in AI for military purposes to enhance decision-making, intelligence analysis, and autonomous weapon systems. Organizations like DARPA and the RAND Corporation are at the forefront of AI-wargaming integration, aiming to create smarter simulations with advanced AI opponents that offer realistic challenges and improve military preparedness.  

However, ethical concerns about AI's role in warfare remain a major issue. Or are they? Could the priority of developing capabilities that the military perceives as advantageous for geopolitical supremacy be the reason for secrecy, independent of ethical considerations? 

The Need for Secrecy 

Military organizations, regardless of country, have strong motivations to maintain secrecy around innovative technologies and strategic developments. One key reason for this is the crucial need to protect national security and stay ahead of potential adversaries (Resnik, 2006). By keeping their latest innovations secret, militaries can effectively prevent sensitive information from falling into the wrong hands, thus safeguarding their strategic advantage and ensuring successful operations (McLauchlan & Hooks, 1995). This emphasis on confidentiality was particularly vital in the context of the Cold War arms race, where the imperative of secrecy was bolstered by the science- and technology-intensive nature of military progress (ibid). The persistent threat of information leaks or espionage further highlights the critical role of secrecy in preserving military superiority (ibid). 

The emphasis on secrecy in military research is linked to the pursuit of tactical and strategic advantages. By keeping information about innovative technologies and strategies confidential, military organizations can surprise and outmaneuver their adversaries during conflicts, thereby increasing their likelihood of success on the battlefield (Resnik, 2006). This element of surprise is a crucial part of military strategy and is made possible by the confidential nature of military innovations (Resnik, 2006). 

The historical context of military innovation has played a significant role in shaping the importance of secrecy within military organizations. During the Cold War and the post-World War II era, military technologies were mainly developed and protected through secrecy rather than patents. This historical precedent has influenced contemporary military practices, where the protection of intellectual property through secrecy is still a priority for military research and development efforts. The legacy of past conflicts and the strategic imperatives that emerged from them continue to shape the culture of secrecy within military establishments (Hunt & Gauthier‐Loiselle, 2010). 

In addition to national security concerns and strategic advantages, the role of secrecy in military research is also tied to the nature of defense acquisition decisions and oversight mechanisms. Top defense elites often make acquisition decisions that involve a high degree of secrecy and limited oversight. This lack of transparency in decision-making processes underscores the broader culture of secrecy that permeates military organizations, where confidentiality is prioritized to maintain operational security and protect sensitive information (Choulis et al., 2022). 

Moreover, the evolution of military technology and its increasing commercialization have also influenced the dynamics of secrecy within military research and development. As military technologies are now being developed by commercial entities, the balance between secrecy and patenting has shifted. While secrecy remains important for safeguarding sensitive military innovations, the rise of commercial applications for aerospace technologies has led to a decrease in the significance of secrecy in certain sectors. This shift highlights the complex interplay between commercial interests, national security imperatives, and the protection of intellectual property in the military domain (Schmid et al., 2017). 

How could the military achieve secrecy in this scenario, considering the dual-use nature of some technologies? 

  • Algorithmic Secrecy: While the physical infrastructure used to develop AGI might have civilian applications, the algorithms themselves are unique and highly sensitive. By focusing on protecting the secrecy of these algorithms, the military can maintain a strategic advantage. 
  • Compartmentalization: Limiting access to sensitive information to a select group of individuals within research and development teams can help maintain secrecy. This approach ensures that only those directly involved in critical aspects of AGI development have knowledge of the most sensitive details. 
  • Information Control Measures: Implementing strict security protocols, robust cybersecurity measures, and stringent data-sharing restrictions can prevent unauthorized access to and dissemination of sensitive information related to AGI research. 
  • Collaboration with Trusted Entities: The military can collaborate with trusted partners in academia and industry while still maintaining control over the most sensitive aspects of AGI development. This allows for the exchange of knowledge and expertise while safeguarding critical information. 
  • Misdirection and Deception: The military could potentially use misdirection and deception tactics to mask their true progress in AGI development. This could involve releasing misleading information or focusing public attention on less sensitive aspects of AI research. 
  • Legal and Regulatory Frameworks: Implementing legal and regulatory frameworks that protect intellectual property and restrict the export of sensitive technologies can help maintain secrecy. 
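
The compartmentalization and information-control strategies above boil down to a "need-to-know" rule: clearance level alone is never sufficient; an explicit grant for the specific compartment is also required. The following minimal sketch illustrates that logic in Python. All names, clearance levels, and compartments here are hypothetical illustrations, not any real classification scheme.

```python
# Toy need-to-know access check: a person may read a document only if
# (a) their clearance level meets the document's classification AND
# (b) they hold an explicit grant for the document's compartment.
# Levels and compartment names below are invented for illustration.

from dataclasses import dataclass, field

CLEARANCE_LEVELS = {"CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

@dataclass
class Person:
    name: str
    clearance: str                                   # e.g. "TOP_SECRET"
    compartments: set = field(default_factory=set)   # explicit need-to-know grants

@dataclass
class Document:
    title: str
    classification: str   # minimum clearance required
    compartment: str      # e.g. "AGI-ALGORITHMS" (hypothetical)

def may_access(person: Person, doc: Document) -> bool:
    """Both conditions must hold; high clearance alone does not suffice."""
    cleared = CLEARANCE_LEVELS[person.clearance] >= CLEARANCE_LEVELS[doc.classification]
    need_to_know = doc.compartment in person.compartments
    return cleared and need_to_know

analyst = Person("analyst", "TOP_SECRET", {"WARGAMING"})
notes = Document("training-run notes", "SECRET", "AGI-ALGORITHMS")

print(may_access(analyst, notes))  # False: cleared, but no compartment grant
```

The point of the sketch is that compartmentalization multiplies the barriers to leakage: even a fully cleared insider sees only the compartments they were explicitly read into.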

To finish, let's comment on the following two papers and on the book The Pentagon's Brain:   

  1. Secrets to Shield or Share? New Dilemmas for Dual Use Technology Development and the Quest for Military and Commercial Advantage in the Digital Age, by Jay Stowsky.  
  2. Situational Awareness: The Decade Ahead, by Leopold Aschenbrenner, cited at the beginning of our article and its main subject. 

The Stowsky article focuses on the challenges of maintaining secrecy in dual-use technologies, which have both civilian and military applications. Stowsky argues that in the digital age, the traditional approach of shielding information may not be effective and that openness and collaboration may be necessary to maintain technological superiority. However, this openness also poses risks to national security. It could imply that in the most advanced cases, the secrecy of the technologies is a critical element for national security. 

In contrast, the Aschenbrenner article focuses on the potential secrecy of the algorithms that would create AGI, not the technology itself. Aschenbrenner argues that while the physical infrastructure required to develop AGI (such as computer chips and the facilities to house them) could be considered dual-use technology, the algorithms are not. These algorithms, he suggests, are the true “secret sauce” in the race to develop AGI, and their potential leakage to foreign adversaries is a significant concern.  

The earlier statement, along with previous remarks by Vladimir Putin, Xi Jinping, Joe Biden, and military leaders, highlights the significant interest in AI for national security. This interest is further underscored by OpenAI's decision to remove explicit prohibitions against using its technology, including ChatGPT, for military purposes. We noted that a spokesperson for OpenAI confirmed to Business Insider that this change was partly due to national security use cases that align with the company's mission. These factors strongly support Aschenbrenner's claim that “Artificial General Intelligence (AGI) is key to superintelligence, one of the United States’ most prized secrets,” a claim that likely holds for other countries as well. 

The following paragraph is from the book Jacobsen, Annie (2015). The Pentagon's Brain. Little, Brown and Company. Kindle Edition. 

“Colonel Goldstine was cleared for a top secret Army program that involved exactly the kind of machine von Neumann was theorizing about. Goldstine arranged to have von Neumann granted clearance, and the two men set off for the University of Pennsylvania. There, inside a locked room at the Moore School, engineers were working on a classified Army-funded computing machine—the first of its kind. It was called the Electronic Numerical Integrator and Computer, or ENIAC.” 

The above apparently happened before von Neumann was aware of such a project. It seems that behind many technological advances there is a secret military project in development. Could it be the same with AI?

The paragraph from Annie Jacobsen's book “The Pentagon's Brain” highlights how early computing technologies, such as the ENIAC, were developed in secrecy under military funding. This historical context underscores the pattern of significant technological advancements often being driven by classified military projects. The involvement of key figures like Colonel Goldstine and John von Neumann in these secretive endeavors points to the strategic importance of keeping groundbreaking technologies under wraps until their potential can be fully realized and secured.

Given this historical example, it is reasonable to suggest that a similar approach could be applied to the development of Artificial Intelligence (AI), particularly Artificial General Intelligence (AGI). The strategic implications of AGI are immense, spanning national security, economic competitiveness, and global power dynamics. As such, it is plausible that governments and military organizations might be investing in and developing AI technologies in secret, much like they did with early computing machines.

In this article, we discussed earlier circumstantial evidence of this hypothesis by highlighting the significant interest and investment by major world powers in AGI. We emphasize the strategic advantages that AGI could provide in military applications, intelligence analysis, and decision-making processes. The potential for AGI to revolutionize various domains underscores the likelihood that its development is being pursued covertly to maintain a strategic edge and protect national security interests.

In conclusion, the pattern of secret military projects driving technological advancements, as seen with the ENIAC and the other examples provided, suggests that AI development, particularly AGI, could be following a similar path. The strategic importance and potential risks associated with AGI make it a prime candidate for classified research and development efforts.

References

Barzashka, I. (2023, December 4). Wargames and AI: A dangerous mix that needs ethical oversight. Bulletin of the Atomic Scientists.

Baum, S. D. (2018). Countering superintelligence misinformation. Information, 9(10), 244. https://doi.org/10.3390/info9100244

Bendett, S. (2022). Russia’s Artificial Intelligence Boom May Not Survive the War. Center for a New American Security (cnas.org).

Brodie, J. F. (2011). Learning secrecy in the early Cold War: The RAND Corporation. Diplomatic History, 35(4), 643-670. https://doi.org/10.1111/j.1467-7709.2011.00971.x

Chang, K. (2022). ‘I know something you don’t know’: The asymmetry of ‘strategic intelligence’ and the great perils of asymmetric alliances. The British Journal of Politics and International Relations, 25(3), 480-497. https://doi.org/10.1177/13691481221109727

Chong, C. B., Razak, M. A., Biak, D. R. A., Tohir, M. Z. M., & Syafiie, S. (2022). A review on supervised machine learning for accident risk analysis: challenges in Malaysia. Process Safety Progress, 41(S1). https://doi.org/10.1002/prs.12346

Choulis, I., Mehrl, M., Escribà‐Folch, A., & Böhmelt, T. (2022). How mechanization shapes coups. Comparative Political Studies, 56(2), 267-296. https://doi.org/10.1177/00104140221100194

Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., … & Williams, M. D. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002

Gaire, U. S. (2023). Application of artificial intelligence in the military: an overview. Unity Journal, 4(01), 161-174. https://doi.org/10.3126/unityj.v4i01.52237

Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. In Advances in Computers (Vol. 6). Academic Press.

Helfrich, G. (2024). The harms of terminology: Why we should reject so-called “frontier AI”. AI and Ethics. https://doi.org/10.1007/s43681-024-00438-1

Hunt, J., & Gauthier‐Loiselle, M. (2010). How much does immigration boost innovation? American Economic Journal: Macroeconomics, 2(2), 31-56. https://doi.org/10.1257/mac.2.2.31

Johnson, J. (2019). Artificial intelligence & future warfare: Implications for international security. Defense & Security Analysis, 35(2), 147-169. https://doi.org/10.1080/14751798.2019.1600800

Johnson, J. (2019). The end of military-techno Pax Americana? Washington's strategic responses to Chinese AI-enabled military technology. The Pacific Review, 34(3), 351-378. https://doi.org/10.1080/09512748.2019.1676299

McLauchlan, G., & Hooks, G. (1995). Last of the dinosaurs? Big weapons, big science, and the American state from Hiroshima to the end of the Cold War. The Sociological Quarterly, 36(4), 749-776. https://doi.org/10.1111/j.1533-8525.1995.tb00463.x

Olsher, D. (2024). Proof of achievement of the first Artificial General Intelligence (AGI). hal-04397466v1

Parker, L. E. (2018). Creation of the national artificial intelligence research and development strategic plan. AI Magazine, 39(2), 25-32. https://doi.org/10.1609/aimag.v39i2.2803

Reis, J., Santo, P. E., & Melão, N. (2019). Artificial intelligence in government services: a systematic literature review. Advances in Intelligent Systems and Computing, 241-252. https://doi.org/10.1007/978-3-030-16181-1_23

Resnik, D. B. (2006). Openness versus secrecy in scientific research. Episteme, 2(3), 135-147. https://doi.org/10.3366/epi.2005.2.3.135

Sanger, D. E. (1985, July 22). Campuses' role in arms debated as 'Star Wars' funds are sought. The New York Times. https://www.nytimes.com/1985/07/22/us/campuses-role-in-arms-debated-as-star-wars-funds-are-sought.html

Schmid, J., Brummer, M., & Taylor, M. Z. (2017). Innovation and alliances. Review of Policy Research, 34(5), 588-616. https://doi.org/10.1111/ropr.12244

Stowsky, J. (2003). Secrets to shield or share? New dilemmas for dual-use technology development and the quest for military and commercial advantage in the digital age (BRIE Working Paper 151). https://www.academia.edu/70621156/Secrets_to_Shield_or_Share_

Suleyman, M., & Bhaskar, M. (2023). The coming wave. Crown.

Swab, A. J. (2019, May). Black budgets: The U.S. government's secret military and intelligence expenditures (Briefing Papers on Federal Budget Policy No. 72). Harvard Law School.

Zatsepin, V. (2007). Russian military expenditure: what's behind the curtain? The Economics of Peace and Security Journal, 2(1). https://doi.org/10.15355/2.1.51

Final Remarks

A group of friends from "Organizational DNA Labs" (a private group) compiled references and notes from several of our theses, as well as from other authors and academics, for this article and analysis. We also used AI platforms such as Claude, Gemini, Copilot, Open-Source ChatGPT, and Grammarly as research assistants to save time and to check the structural and logical coherence of the text. We relied on multiple platforms so that information could be verified across sources and validated against academic databases and the equity-firm analysts with whom I have collaborated. The references and notes in this work provide a comprehensive list of the sources used. As the editor, I have taken great care to ensure that all sources are appropriately cited and that the authors are duly acknowledged for their contributions. The content is based primarily on our analysis and synthesis of these sources. The compilation, summaries, and inferences are the product of our own time, motivated by the desire to expand our knowledge and share it. While we have drawn on quality sources to inform our perspective, the conclusions reflect our own views and understanding of the topics covered, which continue to develop through constant learning and review of the literature in this field.
