ChatGPT and safety concerns around new AI technology

We've all come across works of dystopian fiction that depict a world overrun by AI, with humanity struggling to survive complete annihilation. An example most people are familiar with is the film series 'The Matrix', in which AI enslaves the human population and harvests their life force as a source of energy to sustain itself. Previously, most people might have dismissed these works as the overactive imagination of science fiction writers, but lately this kind of future seems increasingly plausible. Many worry that emerging artificial intelligence technology, such as OpenAI's ChatGPT bot, which is designed to mimic human conversation, will usher in a new era in which AI takes the forefront while humanity is neglected. These thoughts can cause existential dread for some, and we need to ask ourselves: is this future a likely possibility, and if so, how can we avoid it?

Well, the likelihood of an AI apocalypse is quite low at the time of writing, but other valid safety concerns have been raised. Let us consider some of these and look at how the OpenAI team has tried to address them and mitigate any harmful effects.

First, a concern raised by many is that use of ChatGPT will lead to the spread of misinformation. The AI is trained on data generated by humans, and the information it provides can only be as accurate as the data on which it was trained. Currently, a combination of automated systems and human fact-checkers is used to identify and correct misinformation produced by ChatGPT, and feedback from users also helps correct the AI. Still, this technology is in its infancy, so the risk of spreading misinformation remains high, since those most likely to be affected by misinformation often lack the tools or knowledge required to verify the bot's claims.
This is a huge safety concern that we hope will be dealt with in the near future.

Another concern that has been raised is that this AI can be used for identity theft. ChatGPT has become adept at mimicking well-known public figures such as politicians and businesspeople in the public eye. This is because a large amount of data about these individuals is readily available to the bot: it can draw on speeches they have given and articles written about them online, and because these data sets are so large, it is able to produce accurate imitations of these individuals. Fortunately, for those of us who do not have much personal information on the internet, this is not yet a major concern. However, the bot's ability to imitate world leaders is becoming worrying, as individuals with malicious intent could exploit it to spread misinformation, and the consequences could be dire if we are not careful. Thankfully, OpenAI is constantly striving to improve its bot and is always adding new safety features to mitigate these issues.