
The Laughing Machine: AI and the Creative Sector

30 May

Posted by
Sorcha Fitzgerald

Fear and the Unknown

On a bright summer’s day in 1826, Joseph Nicéphore Niépce created what is considered the first photograph by exposing a light-sensitive pewter plate inside a camera obscura at his country house, capturing the view from his window. He named the process heliography. While some hailed this new technology as a democratised art form accessible to the masses, others, like the French painter Paul Delaroche, declared, “From today, painting is dead.” The new technology provoked anxiety in artists of the period, and the future of painting seemed to hang in the balance.

Photography didn’t bring about the end of painting, but it had a hand in transforming it. Since photography could portray the world as it was, the meaning and purpose of painting had to be reimagined. Through the late 19th and early 20th centuries, the world saw the emergence of modern art: Dadaism, Cubism, Surrealism, and Abstract Expressionism, to name a few. Artists of these movements depicted scenes outside the scope of the camera, images that played with form and subjectivity. In Pop Art, artists brought the reproductive qualities of photography into their practice. Photography didn’t kill art, but it fundamentally altered its trajectory.

In 1968, shortly after representing Britain at the Venice Biennale, the artist Harold Cohen moved to San Diego, where he pursued an interest in computer programming. He invented a small machine that used artificial intelligence to move across a surface such as a canvas; armed with a pen or brush, it could generate images. Cohen wanted to codify the act of drawing, and in doing so he invented the first machine capable of producing AI-generated imagery. He called it AARON.

AI and the Creative Sector

AI technology has come a long way since the days of AARON, from recommendation systems on e-commerce websites to the dramatic introduction of general-purpose large language models. AI is quickly transforming our digital environments and our working world. Despite lingering suspicions, it is taking root within the creative sector, where it is deployed to assist in writing code, to brainstorm ideas quickly and, the way I use it most, as a very enthusiastic research assistant. The technology can handle huge amounts of data at speed, so if you need to identify trends within large research documents, or condense the word count on a pitch document, ChatGPT or Google Gemini will seem like a godsend.

Not all AI is general purpose. Existing products are constantly integrating AI solutions to cater to specific needs, whether that is generative technology in Adobe’s suite of products that allows for speedy image and video editing in Photoshop and Premiere Pro, vector generation in Illustrator, or even presentation layouts in FigJam. Almost every other day, new AI-powered services are unveiled.

These tools can streamline the more procedural aspects of one’s work and can help bridge knowledge gaps. AI operates best when it is left to handle the drudge work. Need to remove some clutter from the background of a photo? No problem! Need to draft a schedule structure? You got it! Need to scale up some blurry photos? Easy! Problems arise when AI takes the lead. Unfiltered AI content is boring, which makes sense once you understand the technology: it bases every answer on existing content from its dataset. In some ways it is an aggregation tool, and its answers gravitate towards the average. It does not, and cannot, think of novel or dramatic solutions. It should assist you in arriving at solutions, but it should never be assumed to provide a fully formed and workable solution, especially for a creative problem.

Discussions about AI often centre on automating mundane tasks that are time-consuming and prone to human error. A shift that allows creative professionals to concentrate entirely on creativity sounds great; it’s the best part of the job, after all. However, creative problem-solving requires a substantial amount of energy and can be exhausting. The expectation to engage in prolonged creative activity increases the risk of burnout or creative block.

Personally, I have always valued a mix of intensive creative tasks and, simply put, drudge work. These menial tasks provide mental rest and allow ideas to gestate and mature, which typically yields valuable results when I return to a creative exercise.

If AI assumes all routine tasks, it will become crucial to incorporate mental breaks into our schedules. Continuously demanding creativity from professionals without adequate downtime is not only unrealistic but potentially harmful to mental well-being and sustained creativity within the sector. Sustainable solutions for challenges like this will not be discovered in Silicon Valley but through open and honest dialogue across creative industries.

The Laughing Machine

We’re experiencing a moment of hyper-awareness and media fixation on this technology. The discourse surrounding AI is both unrealistically idealistic and hopelessly fatalistic. To Silicon Valley ideologues, it is a miraculous technology that will reshape all aspects of modern Western society (though not before a massively lucrative fundraising period). There is some truth in that argument: the technology has already found footing in medical industries, security systems, and customer service roles, where it can outperform human efforts impressively.

On the flip side, sceptical voices point towards ethical issues surrounding copyright, data harvesting, job displacement, and bias. Again, all of these concerns are valid. AI companies have been caught harvesting copyrighted material across the internet; the Institute for Public Policy Research has stated that 8 million UK jobs are at risk due to AI, with the effects felt mostly at entry-level, part-time and administrative positions; and bias remains a constant problem across AI products.

I’ve often heard the phrase “AI will not replace us, but those who use AI will replace those who don’t”. It feels overly simplistic, putting the onus on individuals rather than on AI companies or the regulatory bodies that oversee the proliferation of this technology. The regulation and ethical development of this technology will be vital, which is why the laissez-faire approach to regulation previously afforded to social media companies is not viable. If anything, AI should demonstrate the necessity of systems designed to protect individuals and workers, such as a universal basic income.

A central question surrounding the technology is, simply put: where is it headed? With the release of GPT-4o, we were introduced to a system that could speak back to us in real time, with the common linguistic quirks and characteristics that make it seem more human, such as the ability to laugh or sigh. The chosen voice sounded eerily similar to Scarlett Johansson’s AI character in the film Her (2013).

The thing is, Johansson didn’t give OpenAI permission to use her voice, and a legal battle will most likely follow. GPT-4o raises questions around personal ownership, autonomy, the digital commodification of our bodies, and the personification of AI technology. If used without diligence and care for the people around us, this technology can quickly become dangerous. Safeguards to prevent mimicry of our work and our very likeness are needed, not tomorrow but right now.

AI is here and it’s not going away. Every single one of us should strive to become literate in this technology and discerning in its use, both in our own workflows and in the media we consume daily. We need media literacy more than ever; before accepting any information presented to us as fact, we need to ask critical questions about its source, the methods used to present it, and the intention behind the message. By educating ourselves about AI, we can better advocate for transparency and accountability in how these technologies are deployed.

Harold Cohen once said, “We are in the process of coming to terms with the fact that ‘intelligence’ no longer means, uniquely, ‘human intelligence’.” As we automate much of our work, the value we bring to any creative task will be measured not by technical excellence or precision, but by our uniquely human traits: our cultural awareness, cultural specificity, emotional intelligence, empathy, and those little eccentricities derived from our unique personal histories. As AI technologies and their capabilities continue to grow and weave into the fabric of our lives, we need to learn how to live with them in a way that benefits humanity, rather than rendering the most vulnerable among us obsolete. So don’t mimic the machine; more than ever, we need to embrace the very things that make us human.