July Conference: AI debate

The Workers Party is holding a Day Conference on 1st July to discuss how to rebuild British industry. The Conference is for Members Only and invited guests. In the run-up to Conference we are publishing a number of pieces submitted for the purpose of stimulating discussion around the themes of reviving British industry, the challenges of AI and automation, and the role of vocational education.

All our articles in this series are produced by ordinary members; we have no gilded theoreticians to hand down answers. Our struggle is in learning how to think, rather than being told what to think. Our party encourages the active participation of the membership in discussing shared problems and our policy response. This piece has been submitted by Nikola Bryce.

Do automation and AI benefit workers under capitalism?

According to Goldman Sachs, 300 million full-time jobs worldwide will be lost or degraded by AI. To put this into perspective, that is roughly the combined populations of France, Germany, the UK and Italy. Some estimates go as high as 800 million by 2030, with predictions that AI could leave the workforce “totally unrecognisable” by 2040.

AI in the 1970s and 80s was considered a failed science; since the early 1990s, however, it has developed at an exponential rate. AI can do everything from flipping burgers and making fries in fast-food restaurants, to diagnosing a one-in-100,000 medical condition in seconds, to suggesting 40,000 possible new chemical weapons in under six hours. AI is here to stay, and it doesn’t need a pay rise, a holiday or time off sick.

Everyone, from politicians and employers to workers, is in danger of being left behind in the wake of this fast-moving field. The Guardian reported in May 2023 that Sir Patrick Vallance, the government’s outgoing chief scientific adviser, urged that the “government should ‘get ahead’ of the profound social and economic changes that ChatGPT-style, generative AI could usher in.” Even AI developers have awoken to the potential threats of AI’s rapid advances.

AGI, or Artificial General Intelligence, has been described as an existential threat to humanity. The Centre for AI Safety published a statement supported by dozens in the industry: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Sam Altman, CEO of OpenAI and a signatory of that statement, was asked in an interview for his thoughts on AI worst-case scenarios; he said it “is, like, lights out for all of us.”

One cannot help but reflect on Oppenheimer’s expression of regret over his role in developing the nuclear bomb. Quoting the Hindu scripture the Bhagavad Gita, he said: “Now I am become Death, the destroyer of worlds.”

The Future of Life Institute published an open letter this April with nearly 30,000 signatories, including Elon Musk and Apple co-founder Steve Wozniak, calling for a pause on work on advanced AI models for at least six months. Warning of the dangers posed by advanced AI, a section of the letter stated: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation? Such decisions must not be delegated to unelected leaders.”

Regulation of AI has not kept pace in any meaningful way to keep either us or it safe. The Guardian reported that Sam Altman, OpenAI CEO, “told an event in London that, while he did not want the short-term rules to be too restrictive, ‘if someone does crack the code and build a superintelligence … I’d like to make sure that we treat this at least as seriously as we treat, say, nuclear material’”. Meanwhile Google chief Sundar Pichai stated in an FT article: “AI is too important not to regulate, and too important not to regulate well.”

Despite AI leaders’ warnings of the potential dangers of AI, up to and including the destruction of humanity, the UK Government, in its stated ambition to make us an “AI superpower”, presently sees no need to bring in new legislation. Since 2014 the government has spent over £2.5 billion in this area, with hundreds of millions more on research and development; the sector contributes £3.7 billion to the UK economy and provides 50,000 jobs. The government will instead choose to create a regulatory environment in which innovation can thrive. As laid out in a recently issued White Paper, work will initially proceed on a sector-based approach, with the Information Commissioner, the Equality and Human Rights Commission and the Financial Conduct Authority guiding and policing the development and deployment of AI under a non-statutory framework of five common principles. To fail to protect British citizens with robust legislation in the light of such dire warnings from experts in the field is surely a gross act of malevolent fatuity.

It is predicted that AI could transform the UK labour market in as little as 15 years. Uncertainty over what these changes will look like in such a fast-moving landscape presents real challenges and concerns for the workplace and the trade union movement. Introducing AI systems into the workplace without well-thought-through policies and procedures, backed up by legislation, could be like installing a sociopath at the heart of the organisation.

In the US and Europe, approximately two-thirds of jobs are already exposed to some form of AI. In the UK, employers increasingly use it in recruitment, hiring and firing to save time and cost. These systems can exhibit bias: programmers can imprint their racial, gender and socio-economic prejudices into the technology. If a worker believes they have been unfairly discriminated against during any of these processes, they will have to show they were indirectly discriminated against because of an algorithmic preference for a trait, whilst the employer will need to demonstrate that their systems have been stress-tested for bias. It will be interesting to see how case law develops in this area under the Equality Act 2010. Whilst there is no legal minimum award for unfair dismissal, there is a mechanism, based on age and years worked, for calculating compensation; however, the average award is between two and three months’ gross salary, little comfort to workers in what is set to become an ever-diminishing, AI-driven labour market.

The much-vaunted EU AI Act is expected to come into force in 2025. The bill allocates applications of AI to three risk categories: “unacceptable”, posing a serious threat to fundamental rights and freedoms; “high”, posing a significant risk to the health, safety or fundamental rights and freedoms of people; and “low”, systems that pose little or no risk. The bill is incredibly wide-ranging and, heralded as the first of its kind, is expected to set the AI legislative standard internationally.

According to the Harvard Business Review: “Experts predict that artificial intelligence at a larger scale will add as much as $15.7 trillion to the global economy by 2030.” With 60,000 unregulated lobbyists infesting the corridors of Brussels with their bulging accounts and unfettered influence, we shall have to wait and see whether, with such riches in prospect, the bill’s contents and aspirations survive the onslaught of high-tech lobbyists’ vested interests.

Protective measures are beginning to filter through in some countries. In Germany, for example, employers are obliged to consult works councils when introducing AI into the workplace. In New York City, recent legislation prohibits the use of AI in employment decisions unless it has been subjected to a “bias audit” and its use disclosed, with candidates given the opportunity to request an alternative process. The Italian data protection regulator has recently issued a temporary ban on the processing of personal data by OpenAI, citing uncertainty over what individuals’ data is used for, data inaccuracies, and the absence of age verification for those under 13.

AI need not lead to the dystopian future of Blade Runner’s Tyrell Corporation, although, judging by the concerns loudly and consistently voiced by leaders in the field, anything is possible. The military is already embracing and applying all that ChatGPT has to offer, and one can only imagine what horrors the military-industrial complex and the deep state are capable of developing.

During what is being referred to as the fourth industrial revolution, references are often made to the Luddites. However, this industrial revolution is different from all the others: AI is replacing not only our bodies but also our minds. AI has smashed through the class divide, and in the fast pace of this globalist revolution few jobs are safe.

Whilst we cannot sabotage every corporate ChatGPT programme or smash every robot in sight, we must fully engage with what these radical changes mean for ourselves and our families, for our ability to earn a living and to lead fulfilling and creative lives. We have all seen the impact that the closure of the mines, the decline of the steel industry and the erosion of the fishing industry had on their communities; we must make sure history does not repeat itself on a larger scale. We need a nationwide debate on what AI means to us all, and we must hold those in power to account, because if left unchecked many will be left on the scrap heap, our brains and bodies deemed not fit for purpose in the 21st century.


