What are we going to do about AI and what’s it going to do about us? Workers and automation: Recommendations (2/2)

You may have noticed that the Writers Guild of America and SAG-AFTRA, the screen actors’ union, are on strike [1]. Part of this is their fear that movie producers will be able to replace both writers and actors with generative AI and digital avatars. We used to assume that AI would only take away the repetitive jobs, leaving the value of human endeavour to be directed towards the arts and other creative pursuits.

Well, it seems the poets, artists, musicians and videographers are all getting worried about their jobs now too, as AI starts to sweep arts competitions [2] and make music in their name [3]. It feels similar to the moment the RCA A&R man looked suspiciously at Spotify before being handed his P45.

So unlike the lawyers in the previous post, who fear that they will no longer be able to “exploit” their juniors to direct concentrated wealth to the partners, the actors and screenwriters fear the opposite – that producers will be able to capture the value that their unique style and star status currently lets them earn over their peers. Two opposite sides of the same coin.

Will this mean we see actors’ sometimes enormous fees also follow a race-to-the-bottom cost curve, as those who license their likeness [4] start to suck up the jobs by undercutting the competition (box-office stardom as a passive income stream, anyone)? We are also seeing large companies like Google [5], Microsoft [6] and Adobe [7] incorporate generative AI into the tools they release. These are no longer esoteric features embedded only in the tools used by professionals who sell creative output – a process we might call creative disintermediation. It will do for your graphic designer what Amazon did for your book shop.

What can you do about this? Well, you can’t stop it. Trying would be a bit of a King Canute moment, we think. So: 

– Many businesses should try GenAI tools to cut content-creation costs rapidly. This could include adding more creative content and personal touches to customer interactions; 

– Don’t rely on overcharging for repetitive work. This is likely to be a losing proposition going forward, and you should expect juniors undertaking repeatable processes to find ways to automate them (extra warning: here be startups); 

– The apprenticeship model, and assumptions about the elapsed time needed to become “time served”, may change. This might lead to difficulty in hiring junior staff, and to large lateral career moves that are currently impossible within a normal working lifetime; 

– Until AI is “proven in battle” there will be a risk-mitigation premium attached to reassurance from trusted advisors – after all, investment banking still charges a premium; and

– The power of human networks, families, friends and fashionistas will still drive the distribution of power (and wealth) in non-economically rational ways.

[1] https://www.theguardian.com/culture/2023/jul/15/actors-writer-strike-stop-hollywood-what-movies-tv

[2] https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html

[3] AllttA – Savages (YouTube)

[4] https://uk.news.yahoo.com/michael-douglas-plans-license-name-121838765.html

[5] https://workspace.google.com/solutions/ai/

[6] https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/

[7] https://www.adobe.com/uk/sensei/generative-ai.html

What are we going to do about AI and what’s it going to do about us? Workers and automation (1/2)

With great power comes great unemployment. Or at least, that’s the fear many have, especially with every new report from a bank [1] or tech company [2] making headlines for predicting the hundreds of millions of jobs that will be lost to automation.

During discussion, our members had mixed views. First, they concluded that the ability of AI to codify skills and then transmit that information far and wide would mean that “know-how” is rapidly distributed across the globe. But crucially, this isn’t just the spread of familiar ‘know-how’ of the sort we’ve come to use every day, like books, courses and how-to YouTube videos, but also ‘know-do’: packaging years of deep human expertise into an algorithm and then throwing the software into the field to help humans make difficult decisions, or sometimes to make decisions for them.

Interpreting patient CT scans is a perfect example. Research teams across the world [3,4,5] are developing algorithms that can spot the signs of cancer more accurately, more consistently and earlier than doctors and other healthcare practitioners working unassisted. This means earlier, more targeted and more effective treatments, improving patient outcomes and making better use of hospital resources. General-purpose medical assistants that accept mixed data inputs are not far behind [6].

This is great news for advanced hospitals in developed countries and for less fortunate areas alike. Advances like these help to rapidly level the healthcare playing field, increase overall capacity, fill skills gaps in the workforce (performing tasks faster and more accurately), and ultimately drive down costs.

While this will sound like good news if you are sick, or buying services, it’s rather less good if you were the person who used to do the job. There is a well-rehearsed argument that this will free people from menial tasks and provide them with more fulfilling roles. Inter-generationally this might be true, but if you’re a 50-year-old radiologist you’re probably a bit miffed at the prospect.

The network members also pointed out that there is a succession and apprenticeship problem. Many of the skilled workers at the pinnacle of their careers – the ones freed from the menial tasks mentioned above – got there by working their way up through a system in which the repetitive work forms the basis of the training. Not only that, there will be a problem with the vesting process in, say, a legal firm: the monetary benefit of employing large numbers of trainees at low cost and charging out their time at full rates (so the partners reap the profit) may be a model open to the destructive competitive forces unleashed by AI.

Meaning, we may start to see the pyramids crumble.


[1] https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html 

[2] https://arxiv.org/abs/2303.10130 

[3] https://www.royalmarsden.nhs.uk/news-and-events/news/ai-could-help-doctors-diagnose-lung-cancer-earlier

[4] https://ascopubs.org/doi/full/10.1200/JCO.22.01345

[5] https://www.fiercebiotech.com/medtech/google-s-cancer-spotting-ai-outperforms-radiologists-reading-lung-ct-scans 

[6] https://www.linkedin.com/posts/vivek-natarajan-a3670118_medicine-is-a-multimodal-discipline-absolutely-activity-7090202293710557184-Vtie/

What are we going to do about AI and what’s it going to do about us?

What are we going to do about AI?  It’s all over the press, and I wouldn’t be surprised if ChatGPT is named the New York Times person of the year. ChatGPT, however, is far more modest. When asked to predict who might be named, it doesn’t speculate much, and instead reminds me of Capt. Kirk asking Mr. Spock a cheeky question about kissing Lt. Uhura: 

“As an AI language model, I don’t have the ability to predict or determine who will be named the New York Times Person of the Year for 2023 or any other year. The selection process for the Person of the Year is conducted by the editorial staff of the New York Times, and it typically recognizes individuals or groups who have had a significant impact, influence, or prominence during the year.

While AI and technology have the potential to shape and influence society in various ways, it is difficult to speculate on the specific likelihood of ChatGPT being chosen as the New York Times Person of the Year. Factors such as societal impact, cultural relevance, and the achievements of individuals or groups involved in AI development and application would likely be considered in the selection process. Ultimately, it would be up to the New York Times to decide who they believe deserves the title for that particular year.” 

Despite trying to ignore AI, and thinking that it was all just a new name for regression analysis, I have been quite shocked at how clever the generative results appear to be. So in Q2, after finally accepting that we might be part of the ‘99% of people who are using ChatGPT all wrong’, we decided to ask a number of members of the Bestem Network to consider what the implications of AI could be and what the heck we can do to prepare. The conclusion was that we should consider five areas in the immediate future: 

  • The impact of AI on industries and their workers; 
  • The interplay between climate change and AI; 
  • The implications for cybersecurity as AI becomes more important;  
  • AI and the battle for truth; and 
  • The impact of AI on wider society. 

Over the next few weeks we will publish four posts summarising our findings. Without spoiling the surprise: whilst humanity has faced plenty of disruptive phase shifts in the past, from ice ages to plagues and collapsing empires, it looks increasingly like this one could determine our long-term future. And to make it work, we’re also going to need to invent a whole new level of international collaboration, one that makes our existing nuclear non-proliferation treaties look amateur.