Humans must adopt ‘new way of life’ to beat AI’s sinister advantage, expert says

HUMANS will need to change their behaviors to avoid falling victim to deceptive chatbots.

A cyber-expert has told The U.S. Sun how you must avoid putting too much trust in artificial intelligence.

AI chatbots are an amazing tool – but be very careful when using them. Credit: Getty

AI chatbots are now increasingly popular, with tens of millions of people flocking to apps like OpenAI's ChatGPT and Google Gemini.

These chatbots use large language models that allow them to speak to you just like a human.

In fact, a study recently claimed that OpenAI’s GPT-4 model had passed the Turing test – meaning humans couldn’t reliably tell it apart from a real person.

We spoke to cyber-expert Adam Pilton, who warned that the humanlike way chatbots talk makes them much more capable of deceiving us.

“It feels as though it would be easier to be drawn in by the conversational nature of a chatbot, versus perhaps a deceptive website or search result,” said Adam, a Cyber Security Consultant at CyberSmart and former Detective Sergeant investigating cybercrime.

He continued: “As humans, we build trust where we potentially see a relationship and it’s a lot easier and understandable to be able to build a relationship with a chatbot compared to a website.

“A website doesn’t respond to our specific requests whereas with the chatbot we feel like we’re building a relationship because we can ask it specific questions.

“And the answer it gives us is tailored to specifically address that question.

“In this modern digital world we are living in, a key skill will now be the verification of information – we cannot simply trust what we are first told.”

SNEAKY SPEAKERS

Earlier this year, scientists revealed how AI had mastered the art of “deception” – and had learned to do so on its own.

And chatbots are even capable of cheating and manipulating humans.

Spotting the signs that a chatbot is trying to trick you is important.

But Adam warned that we must now adopt a “new way of life” where we don’t trust AI chatbots – and instead verify what they tell us elsewhere.

What is ChatGPT?

ChatGPT is a new artificial intelligence tool

ChatGPT, which was launched in November 2022, was created by San Francisco-based startup OpenAI, an AI research firm.

It’s part of a new generation of AI systems.

ChatGPT is a language model that can produce text.

It can converse, generate readable text on demand and produce images and video based on what has been learned from a vast database of digital books, online writings and other media.

ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions.

GPT stands for Generative Pre-Trained Transformer and describes the type of model that can create AI-generated content.

If you prompt it – for example, by asking it to “write a short poem about flowers” – it will create a chunk of text based on that request.

ChatGPT can also hold conversations and even learn from things you’ve said.

It can handle very complicated prompts and is even being used by businesses to help with work.

But note that it might not always tell you the truth.

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.

“Disinformation and simply incorrect information is going to be an increasing problem for society and democracies around the world as we continue to evolve in this digital world,” Adam told The U.S. Sun.

“As such, the verification of information is going to be a common requirement, and the use of chatbots is no different.

“We can no longer depend upon a single source of information. Verification from multiple trusted sources is now a way of life.”

SHARE CARE

AI ROMANCE SCAMS – BEWARE!

Watch out for criminals using AI chatbots to hoodwink you…

The U.S. Sun recently revealed the dangers of AI romance scam bots – here’s what you need to know:

AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversation and can be difficult to spot.

However, there are some warning signs that can help you identify them.

For example, if the chatbot responds too quickly and with generic answers, it’s likely not a real person.

Another clue is if the chatbot tries to move the conversation off the dating platform and onto a different app or website.

Additionally, if the chatbot asks for personal information or money, it’s definitely a scam.

It’s important to stay vigilant and use caution when interacting with strangers online, especially when it comes to matters of the heart.

If something seems too good to be true, it probably is.

Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.

By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.

Chatbots will only become more popular over time as their capabilities grow.

But there are many risks, including giving too much of your own information over to them.

Experts recently warned The U.S. Sun about the importance of not telling an AI too much about yourself.

They’ve even been described as a “treasure trove” for criminals looking to find out info about victims.

Used safely, chatbots can be hugely helpful – but be careful not to tell them too much, and don’t trust everything they say.
