ChatGPT: The New AI Technology

Have we not all come across works of dystopian fiction that depict a world overrun by AI, with humanity struggling to survive complete annihilation? An example most people are familiar with is the film series ‘The Matrix’, in which AI enslaves the human population and uses their life force as a source of energy to sustain itself.
These thoughts might cause existential dread for some. We need to ask ourselves: is this future a likely possibility, and if so, how can we avoid it? The likelihood of an AI apocalypse is quite low at the time of writing; however, other valid safety concerns have been raised. Let us consider some of these and look at how the OpenAI team has tried to address them and mitigate any harmful effects.
A first concern raised by many is that the use of ChatGPT will lead to the spread of misinformation. The AI is trained on data generated by humans, so the information it provides is only as accurate as the data it was trained on. To identify and correct misinformation that ChatGPT produces, a combination of automated systems and human fact-checkers is used, and feedback from users also helps correct the AI. However, this technology is in its infancy, so the risk of spreading misinformation remains high, and the people most likely to be affected often lack the tools or knowledge required to verify claims made by the bot.
This is a huge safety concern that we hope will be dealt with in the near future. Another concern is that ChatGPT is being used for identity theft. The bot has become adept at mimicking well-known public figures such as politicians and business leaders, because a large amount of data about these individuals is readily available to it: speeches they have given and articles written about them are easy to find online. Due to the size of these data sets, the bot can produce accurate imitations of these individuals.
Fortunately, for those who do not have much personal information on the internet, this is not yet a major concern. However, the bot’s ability to imitate world leaders is becoming worrying. Individuals with malicious intent might take advantage of it to spread misinformation, and the consequences could be dire if we are not careful. Thankfully, OpenAI is constantly striving to improve its bot and is always adding new safety features to mitigate these issues.