The state of generative AI
In this article, we will look at some of the benefits, drawbacks, ethical and legislative issues that are coming up in the wake of the generative AI revolution.
‘I imagine a world in which AI is going to make us work more productively, live longer, and have cleaner energy.’ – Fei-Fei Li, Computer Scientist and Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Artificial intelligence (AI) has become increasingly common in today’s world and now permeates many aspects of our life. From using Siri and Alexa right through to a simple Zoom call or Google search, these AI systems have well and truly woven their way into our everyday lives and become essential components of decision-making processes, data analysis, and customer interactions.
Amidst this AI-driven landscape, a new and very exciting branch of artificial intelligence has surfaced – generative AI. Popular generative AI tools such as ChatGPT, Google Bard, Dall-E and Bing have exploded into the public sphere, truly putting us on the verge of a modern-day technological revolution.
But what is generative AI? Who better to answer this question than the AI itself… We asked ChatGPT and this was the response:
It is clear that generative AI is extremely important to not only our futures, but also our present.
Effective innovation in a changing world
Innovation is everything. Without it, we would never develop new solutions, products, or tools. Innovation drives us forward, and it is this creative brilliance that has long been a key human trait.
In business, a delicate balance needs to be struck between allocating time and financial resources to running existing operations efficiently and investing in innovation. Typically, this would be broken down as follows: 10-20% innovation and 80-90% operations.
It is important for companies to protect the financial viability of their operations, but at the same time, they must also go out and explore new and exciting opportunities, products and ways of doing things. In order to create this valuable innovation stream, there are two main methods.
The first innovation method involves continuous day-to-day improvements in a company’s operations by looking at each process that the company performs and refining it to make it more efficient. These baby steps add up over time to help achieve a much better product or service overall. It could be in the form of improving how a manufacturing machine works, removing unnecessary steps from a key process or providing better training to the company’s staff.
The second method of innovation is arguably the most important, as it involves grassroots innovation and creativity. To apply this method successfully, companies must go out into the world and explore. By exploring both their own industry and the industries around them, they may find inspiration in techniques and processes that could help them in ways they would never have considered had they not taken such proactive steps.
This method is great, but it does require funding. Companies can either allocate resources to their own staff to carry out this innovative exploration, or they can invite businesses or individuals to approach them with promising ideas. If the company likes an idea, it may support it financially to help research, test and get it off the ground. This approach of not only actively exploring innovation but also inviting people to bring ideas in exchange for funding is a stroke of creative genius, and certainly a great way of fostering innovation.
Innovation is what has brought us artificial intelligence. Once just the brainchild of think tanks and technical boffins, generative artificial intelligence is now a reality, and although still in its infancy, it is an extremely exciting notion that offers an almost unlimited realm of possibilities.
Will artificial intelligence take our jobs?
‘Artificial intelligence could potentially replace 80% of jobs “in the next few years”.’ – Ben Goertzel, AI expert
A common concern among both the general public and the business world is that at some point, AI will replace humans in their jobs. Many people worry that they will be ousted from their place of work and unable to make a living. But is this actually a serious possibility or simply an overreaction? Ben Goertzel doesn’t seem to think it is a bad thing at all.
‘I don’t think it’s a threat. I think it’s a benefit. People can find better things to do with their life than work for a living… Pretty much every job involving paperwork should be automatable.’
Artificial intelligence is not coming for our jobs, it’s coming for outdated processes. For example, Anthony Peake, CEO and Founder of Intelligent AI, stated in an interview with Insuretech Insights that AI augmentation is not likely to replace the current roles of risk engineers and underwriters in the insurance claims industry, but rather transform the way they work on tasks. Peake claims that insurers spend 80% of their time doing admin and just 20% of their time actually working with clients. With AI, these numbers could be flipped, allowing insurers to spend more time on value-adding tasks and less time on admin.
If we look back in history, each time a major technological development was made, we as humans were fearful of it. This fear of the unknown is natural, but there is no reason to be afraid: no matter how smart artificial intelligence systems become, humans are highly creative and will likely always have the upper hand against machines.
What are the capabilities of artificial intelligence in the near future?
Thanks to the recent boom in generative AI systems, it appears the sky’s the limit when it comes to potential opportunities! For the immediate future, it’s likely that AI systems will simply continue to develop and mature, as even though AI has been around for a while, it’s still young, so there is still a lot of growing to be done.
Despite the current infancy of generative AI, its language capabilities are its most exciting feature right now. Narrow AI systems have been used for more than 10 years already, but this language-producing generative form of AI is really opening up a world of possibilities for us.
AI now has the chance to become creative and really learn from huge amounts of data. It is beginning to emulate human behaviours and intersect with other technologies in incredible ways (such as image-to-text capabilities).
One example of a recent development in generative AI is AutoGPT. AutoGPT is an “AI agent” that takes a goal expressed in natural language, breaks it down into sub-tasks, and uses the internet (and other tools) in an automatic loop to work through them. This ability to interconnect its operations and solve complex problems is unprecedented, making it an incredible tool. This is just one example of how the future landscape of generative AI may look…
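To make the idea of such an agent loop concrete, here is a minimal sketch in Python. It is purely illustrative – the `plan` and `run_tool` functions are simple stand-ins for what would, in a real agent like AutoGPT, be calls to a language model and to external tools such as web search:

```python
# Illustrative sketch of an AutoGPT-style agent loop.
# 'plan' and 'run_tool' are hypothetical stand-ins, not a real LLM or tool API.

def plan(goal):
    """Stand-in for the LLM planning step: break a goal into sub-tasks."""
    return [f"research: {goal}", f"summarise findings on: {goal}"]

def run_tool(task):
    """Stand-in for a tool call (e.g. a web search); returns a result string."""
    return f"result of '{task}'"

def agent_loop(goal):
    """Plan, then act on each sub-task in turn, collecting the results."""
    tasks = plan(goal)                   # decompose the goal into sub-tasks
    results = []
    while tasks:
        task = tasks.pop(0)              # take the next sub-task
        results.append(run_tool(task))   # act using a tool, store the outcome
    return results

print(agent_loop("renewable energy trends"))
```

In a real agent, the results of each step would also be fed back into the planning stage, allowing the loop to refine or add sub-tasks as it goes.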
The race for a general AI
With the various tools and companies involved in the AI development race at this time, you would be forgiven for feeling a strong sense of ‘early-to-late 90s’ déjà vu. At that time, there was quite a range of internet search engines around, with no obvious winner. By 2000, Google had established itself as the dominant search engine and it remains so (for the Western world at least) to this day.
The current AI landscape is highly reminiscent of the late-90s search engine battle. There are many companies currently weighing in – from OpenAI’s ChatGPT to Google’s Bard – but who will come out on top?
If history is anything to go by, the writing is already on the wall – there will eventually be one general AI system that is used throughout the world.
The ethical concerns of artificial intelligence
Naturally, the development of AI technologies raises many ethical questions, and these concerns fall into several broad categories.
At the forefront of these concerns are issues relating to the risks AI poses to humans and evaluating whether these risks are manageable.
The Future of Life Institute published an open letter in March 2023 urging a six-month pause on the training of the most powerful AI systems until we better understand their potential impacts. Signed by numerous industry giants (such as Elon Musk), this letter called on big tech to think very carefully about their development of AI systems.
‘Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.’ – The Future of Life Institute, Pause Giant AI Experiments: An Open Letter, 22nd March 2023
There seems to be general agreement regarding these concerns, with many people feeling that generative AI tools should only be launched and made available to the general public once they have been properly tested, trained and corrected for biases.
Unfortunately, these generative AI systems are not perfect at the time of launch, and can often contain many flaws. Once they are available to the public, it’s much harder to control these risks, meaning that the best course of action would be to test and develop them more thoroughly before releasing them.
It is often the case that big tech works on the theory that ‘it’s easier to ask for forgiveness than for permission’, which is not always a bad thing, but sometimes this can go too far. With the generative AI models available to the public right now, there are still too many unanswered questions that can affect security on personal, commercial and even national levels.
That being said, it is also true that flaws in a system of any kind can be a strong driver of innovation. As soon as a new technology comes out, especially if it’s not ‘perfect’, it is open to the public to scrutinise and develop. This offers fantastic opportunities for even more innovation than if the company had kept the product to itself until it was 100% happy with it.
Data privacy and GDPR
Generative AI models are set up for language optimisation – not accuracy. This means there is certainly the possibility of ‘wrong answers’, as the model focuses first and foremost on generating fluent language rather than searching for the right answer. This raises the question of whether the AI algorithms analyse and use the data they receive to train their models in order to get better, more accurate results. If so, what are the security and privacy rules surrounding this data?
These generative AI models are set up so that data is ‘anonymous’ – but is it, really? In actual fact, it is possible to extract personal information from the data, and it is this possibility that is blurring the lines when it comes to privacy laws and GDPR.
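To illustrate why ‘anonymous’ is a weaker guarantee than it sounds, here is a small, entirely hypothetical Python sketch of a classic linkage attack: records with the names removed are re-identified by joining them to a public dataset on shared quasi-identifiers (here, postcode and birth year). All names and data below are invented for illustration:

```python
# Hypothetical linkage attack: re-identifying 'anonymised' records by joining
# them to a public dataset on quasi-identifiers. All data here is invented.

anonymised = [  # usage records with names stripped out
    {"postcode": "SW1A", "birth_year": 1985, "query": "medical question"},
    {"postcode": "M1",   "birth_year": 1990, "query": "travel question"},
]

public_register = [  # a publicly available dataset that does include names
    {"name": "A. Smith", "postcode": "SW1A", "birth_year": 1985},
    {"name": "B. Jones", "postcode": "M1",   "birth_year": 1990},
]

def relink(records, register):
    """Join the two datasets on the shared quasi-identifiers."""
    matches = []
    for r in records:
        for p in register:
            if (r["postcode"], r["birth_year"]) == (p["postcode"], p["birth_year"]):
                matches.append((p["name"], r["query"]))
    return matches

print(relink(anonymised, public_register))
# Each 'anonymous' query is now tied back to a named individual.
```

Real-world re-identification is more involved than this toy join, but the principle is the same, and it is exactly this possibility that puts such data within the scope of GDPR.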
On the 31st of March 2023, Italy’s data regulator, the Garante, temporarily banned ChatGPT over data security concerns. On the 12th of April 2023, it sent a list of demands to ChatGPT’s creators, OpenAI, asking a range of questions based on its privacy and data management concerns and giving them a month to respond. By the end of April 2023, OpenAI had responded and ChatGPT was once again accessible in Italy.
Italy’s concerns are valid, and they stand as an important benchmark for both international governmental regulators and the AI tech firms themselves. A new set of GDPR regulations is required in order to take control of data privacy, management and security of these generative AI systems before it gets out of control.
The difficulty of regulating generative AI systems
Technology moves fast, but by its very nature, generative AI technology is moving at lightning speed. This is great in terms of general technological development, but it makes it incredibly difficult for regulation to keep up. Each time a set of regulations is put in place, the technology has already moved on.
As well as data security and privacy, there is also the issue of copyright. When we use generative AI systems such as ChatGPT, who do the results legally belong to? Are they the sole property of the user who generated them or does the company running the algorithms also have some sort of claim on them? What about in terms of the data produced – does the company have a legal right to use this data to train iterative models? Does using publicly available data infringe on any copyrights? Simply because something is available doesn’t mean that an individual or company has a right to it, so what is the legislation governing this issue?
At this time, there is no clear answer. What is clear is that we need to have these frank conversations now while we still can. Governments and big tech need to come together at the table to hash out the finer details to maintain the security of personal data, copyrights, safety and security.
There are many positives when it comes to generative AI and its future possibilities. If implemented effectively, it can revolutionise our processes, thinking strategies, content creation and administration. There are issues to watch out for, absolutely, but if we get on top of them now, the sky is truly the limit. Generative AI is not something to be afraid of, but it is certainly something we need to approach with great care.
If you would like to learn more about the state of generative AI in insurance please follow this link to watch IT Insights: InsurTalk interview with our Generali guests, Emanuele Colonnella and Danilo Raponi.