Research

female cyberpunk ripperdoc, in a workshop, clean pixel art, neon colors --ar 16:9

This is a repository of all the useful Generative AI research, surveys, and in-depth articles that I’ve come across.


Metaculus - When will the first weakly general AI system be devised, tested, and publicly announced?

Ongoing

An aggregation of community forecasts on when Artificial General Intelligence (AGI) will come online.


Stanford University - AI Index Report 2023

August 2023

The report provides a comprehensive overview of the current state of artificial intelligence, with chapters on public opinion and technical performance, including original analysis of large language and multimodal models. There is also a detailed look at trends in global AI legislation, a study of the environmental impact of AI systems, and more. The report highlights that AI has moved into its era of deployment, with new large-scale AI models being released every month, but notes that these models also present complicated ethical challenges. While private AI investment decreased for the first time in a decade, AI continues to be a topic of great interest to policymakers, industry leaders, researchers, and the public. The report emphasises the need for critical thinking about how we want AI to be developed and deployed, given its increased presence and potential for massive disruption.


fast.ai - AI Safety and the Age of Dislightenment

July 2023

The article discusses the potential risks and implications of stringent AI model licensing and surveillance. The author argues that such regulations may be counterproductive, leading to an unsustainable concentration of power and potentially rolling back societal gains from the Enlightenment era. The article emphasises that AI models are general-purpose computing devices and that it is impossible to guarantee they cannot be used for harmful applications; regulation should therefore focus on the usage of AI models rather than their development. The author also warns against the centralisation of AI power in a few big companies, arguing that this could lead to less competition, higher prices, less innovation, and lower safety, and advocates for openness, humility, broad consultation, and a careful approach to regulation in order to develop better responses aligned with our principles and values.


McKinsey - Generative AI and the Future of Work in America

July 2023

The report discusses the rapid evolution of the US labour market, the changing nature of work, and the impact of generative AI on job automation. During the pandemic period (2019-2022), the US labour market saw 8.6 million occupational shifts, 50 percent more than in the previous three-year period. By 2030, activities accounting for up to 30 percent of hours currently worked across the US economy could be automated, a trend accelerated by generative AI. The report also discusses the impact of federal investment in climate and infrastructure, as well as structural shifts, on labour demand. It predicts that an additional 12 million occupational transitions may be needed by 2030, with workers in lower-wage jobs up to 14 times more likely to need to change occupations than those in the highest-wage positions. The report emphasises the need for large-scale workforce development and more expansive hiring approaches from employers.


The Verge - AI is a Lot of Work

June 2023

The article discusses the often overlooked human labour that underpins the development and functioning of AI systems, highlighting the role of annotators, who process and label the raw data used to train AI models. This work, often tedious and repetitive, is carried out by a vast, mostly invisible workforce. The author also discusses the opaque nature of this work, with workers often unaware of the larger context or purpose of their tasks. The article underscores that behind even the most advanced AI systems are large numbers of people labelling data to train them and clarifying data when the systems get confused. It also highlights the potential for AI to change the nature of work rather than simply replace jobs, suggesting that as AI systems become more integrated into various sectors, the need for human annotators and similar roles will likely increase rather than decrease.


Bloomberg - Humans Are Biased. Generative AI Is Even Worse

June 2023

Stable Diffusion, an AI model that generates images based on written prompts, has come under scrutiny for perpetuating and amplifying racial and gender stereotypes, according to an analysis of over 5,000 images it created. The model, used in a wide range of applications from advertising to political campaigns, reflects serious issues with biased data influencing AI outputs. The alarming implications of this bias were highlighted in an experiment where Stable Diffusion over-represented white males in high-paying jobs, while marginalizing women and individuals with dark skin. Despite the intent of London-based Stability AI, the distributor of Stable Diffusion, to improve bias evaluation techniques and develop more representative models, these biased outputs risk furthering societal stereotypes and could lead to unfair treatment. As AI-generated content grows, with some predicting that as much as 90% of internet content could be artificially generated within a few years, the potential for this bias to affect areas like policing, advertising, and representation is becoming an increasingly urgent issue to address.


The Verge - Hope, Fear & AI

June 2023

The research uses data from a poll of over 2,000 US adults to understand their sentiments, usage, and expectations of AI. The survey revealed that despite widespread coverage, use of AI tools is still limited and concentrated among younger generations. People have high expectations of AI's impact on society, surpassing those for electric vehicles or NFTs. The most common use of AI tools was for creative experiments, with a clear indication that AI is expanding people's creative capacities. Ethical concerns exist around AI tools duplicating styles without the original creators' consent. The survey results indicate strong support for regulations and standards in AI development and usage. While AI's societal impact draws mixed feelings of excitement and anxiety, there's a surprising openness to the emergence of sentient AI, with two-thirds of respondents having no issue with companies attempting to create one.


McKinsey - The Economic Potential of Generative AI

June 2023

The research explores the transformative potential of generative AI on the global economy. The report estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases analysed, increasing the impact of all artificial intelligence by 15 to 40 percent. This value is concentrated in four areas: customer operations, marketing and sales, software engineering, and R&D. The report also highlights the potential of generative AI to change the nature of work, automating activities that account for 60 to 70 percent of employees' time today. However, this acceleration in automation potential will require investments to support workers as they shift work activities or change jobs. The report emphasises that the era of generative AI is just beginning, and its full realisation will require addressing challenges such as managing inherent risks, determining new skills needed in the workforce, and rethinking core business processes.


Boston Consulting Group - AI at Work

June 2023

The survey collected responses from nearly 13,000 people across 18 countries, including executive suite leaders, middle managers, and frontline employees, to understand their thoughts, emotions, and fears about AI. The survey found that respondents are generally optimistic about how AI, particularly generative AI, will affect their work, believing it will save them time and promote innovation in their roles. However, this optimism varies significantly by seniority and country. The survey also revealed deep-seated concerns, including that companies are not taking adequate measures to ensure responsible use of AI and that more upskilling is needed to prepare for expected changes to work. Furthermore, the survey found that comfort level with generative AI tools plays an important role in boosting positive sentiments about AI at work throughout the organisation.


Gartner - Generative AI in Marketing

May 2023

The survey collected responses from marketing leaders about their experiences and expectations regarding the use of generative AI in marketing. It found that 48% of marketing leaders report their organisations are already using generative AI in some part of their marketing funnel, with an additional 43% planning to use it. The most common applications are in content marketing, product marketing, and customer experience. The survey also found that 94% of marketing leaders believe generative AI will become a regular part of marketing teams' tech stacks within four years. The top benefits seen or expected from generative AI deployment are improved speed to market, improved productivity, and improved flexibility, while the main barriers to adoption are skills gaps, unforeseen security threats, and integration with existing technology.


The Economist - Large, creative AI models will transform lives and labour markets

April 2023

The article discusses the transformative potential of, and concerns about, large, creative AI models like ChatGPT, which can generate varied content, from songs to essays. The piece delves into the use of deep learning and neural networks for image recognition and language processing, outlining how ChatGPT uses tokenization, embedding, and an attention network to process language. Alongside exploring the model's training process, the article points out associated risks, such as bias and unpredictability. It concludes by recognising the rapid growth and untapped potential of AI models and emphasising the necessity for regulation given the emerging risks.
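
To make the pipeline the article describes more concrete, here is a minimal, self-contained sketch in Python/NumPy of tokenization, embedding, and single-head scaled dot-product attention. The toy vocabulary, random weights, and single attention head are illustrative assumptions only, not how ChatGPT is actually implemented; production models use subword tokenizers, learned weights, and many stacked multi-head attention layers.

```python
# Illustrative sketch only: toy vocabulary, random weights, one attention head.
import numpy as np

rng = np.random.default_rng(0)

# 1. Tokenization: split the prompt and map each word to an integer id.
vocab = {"large": 0, "creative": 1, "ai": 2, "models": 3,
         "will": 4, "transform": 5, "labour": 6, "markets": 7}
sentence = "large creative ai models will transform labour markets"
tokens = [vocab[word] for word in sentence.split()]

# 2. Embedding: look up a dense vector for each token id.
d_model = 8
embedding_table = rng.normal(size=(len(vocab), d_model))
x = embedding_table[tokens]                        # (seq_len, d_model)

# 3. Attention: every position compares itself with every other position
#    and builds a weighted mix of their values.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_model)                # pairwise similarities
scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
context = weights @ V                              # context-aware token vectors

print(context.shape)                               # (8, 8): one vector per token
```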


Microsoft - 4 Ways Leaders Can Empower People for How Work Gets Done

January 2023

The study highlights that many workers perceive artificial intelligence (AI) and low-code/no-code tools as the future of work, with 91% of surveyed employees expressing interest in using these technologies. However, the report identifies several barriers to successful implementation, including lack of access to AI-powered tools, insufficient training, resistance to change, privacy and security concerns, lack of strategic implementation, and inadequate support from leadership. The article underlines the need for companies to address these issues by providing the necessary infrastructure, delivering effective training, and fostering a culture that embraces these transformative technologies, thereby paving the way for a more efficient and automated future of work.


Sequoia - Generative AI: A Creative New World

September 2022

The article discusses the rise of Generative AI, a class of large models that can create human-like content such as writing, code, and designs. The authors predict that this technology could revolutionise industries that involve human creativity by bringing the marginal cost of creation and knowledge work down towards zero. By improving the efficiency of billions of knowledge and creative workers, Generative AI could potentially generate trillions of dollars of economic value. Despite the exciting prospects, the authors note that current models are still maturing, and various business and technological issues, such as copyright and cost, need to be resolved.


The Royal Society - Portrayals and perceptions of AI and why they matter

December 2018

The report explores how artificial intelligence (AI) is portrayed and perceived, particularly in the English-speaking West, and the implications of these narratives, drawing on discussions from four workshops held between May 2017 and May 2018. It highlights that narratives, both fictional and non-fictional, are essential to the development of science and to people's engagement with new knowledge and applications, but that they can also create false expectations and perceptions that are hard to overturn. Exaggerated expectations and fears about AI, along with an overemphasis on humanoid representations, can affect public confidence and perceptions. The report suggests that these limitations can be addressed by communicating uncertainty, learning from narratives about other disruptive technologies, widening the body of available narratives, and creating spaces for public dialogue. It also provides a brief history of imagining intelligent machines, dating back to Homer's Iliad.


The Anatomy of an AI System

September 2018

This article draws an analogy between Amazon's Echo and Athanasius Kircher's 17th-century 'talking statue', both devices that eavesdrop on their surroundings, to emphasise the power dynamics inherent in such technology. It then critically examines the resource implications of building these systems, highlighting vast income disparities and labour exploitation in the production process. In particular, it draws attention to the dangerous and low-paid work of cobalt miners, contrasted with the wealth of tech industry leaders such as Jeff Bezos. The article also underscores the complex and opaque nature of contemporary supply chains, citing Intel and Philips as examples of companies trying to ensure their supply chains are 'conflict-free', but noting the immense difficulty of tracing minerals to their source and understanding a full supply chain that includes thousands of suppliers and contractors across various countries.


Wait But Why - The AI Revolution: The Road to Superintelligence

January 2015

The article is an in-depth exploration of the concept of Artificial Intelligence (AI), its current state, and its potential future. The author emphasises the importance of understanding AI, calling it the most important topic for our future. The article introduces the 'Die Progress Unit' (DPU): the amount of progress needed for someone transported to the future to be shocked to death by how much the world has changed. The author argues that, due to the Law of Accelerating Returns, these leaps are happening faster and faster, suggesting that the world 35 years from now might be unrecognisable. The article also explains the three calibres of AI: Artificial Narrow Intelligence (ANI), which specialises in one area; Artificial General Intelligence (AGI), which can perform any intellectual task that a human being can; and Artificial Superintelligence (ASI), which is much smarter than the best human brains in practically every field. The author argues that our linear thinking and personal experiences often limit our understanding and acceptance of these concepts.


Research summaries written by ChatGPT