Real estate professionals and agents are taking their businesses to the next level with the game-changing power of artificial intelligence (AI) tools like ChatGPT and deepfakes. But there's a catch: cybercriminals are also getting in on the action, using the same tools to create convincing disinformation, impersonations, and even fake properties and images. The large sums of money involved make the real estate industry especially vulnerable.
Agents should continue embracing AI; it is here to stay and an excellent productivity tool. The key is being educated about the cyber threats AI creates while protecting their clients, too.
1. Using ChatGPT: The good, the bad and the criminal
OpenAI’s ChatGPT tool allows the user to type a query, question or request into the input box, and it produces impressively polished content. Users can then turn to Grammarly, another AI tool, to edit their writing and ensure it is grammatically correct.
In the past, we could quickly identify scam emails because they were so poorly written. However, cybercriminals can now create fake website content, emails, social media, texts and blog posts that are convincingly close to professionally written content.
Criminals can prompt the tool to write a highly persuasive sales text or email or to elicit a strong call to action when sending a criminal phishing email, seeking bank account or personal information and so on. It can even create and translate to and from other languages, expanding the victim base.
2. ChatGPT and wire fraud
According to the FBI, business email compromise (BEC) is the fastest-growing, most financially damaging internet-enabled crime, accounting for $2.4 billion in losses in 2021.
REWF (real estate wire fraud) is now a sub-category for crimes in which cybercriminals target companies and individuals executing large wire transfers related to real estate transactions.
Email is the easiest way to infiltrate real estate transactions. ChatGPT lets cybercriminals craft manipulative phishing emails, often combined with social engineering. Social engineering occurs when cybercriminals “trick” people into sharing information they typically would not share, often with the goal of stealing their identity.
ChatGPT allows criminals to gain the confidence of real estate agents, lenders, buyers and sellers by posing as one of the parties and producing convincing yet fraudulent social media posts and other real estate documents.
A cybercriminal could also use ChatGPT to impersonate a lender or title company and generate correspondence that mimics legitimate requests for wire transfers, designed to trick individuals into sharing bank account information or authorizing the transfer of funds to a fraudulent account. Agents should include a wire fraud disclaimer in their email signatures and have consumers sign a form acknowledging the dangers.
ChatGPT could also generate correspondence containing malicious links or attachments that compromise the security of individuals’ computers or mobile devices. The malware could be designed to steal sensitive information, such as bank account details or login credentials, or to give the cybercriminal unauthorized access to individuals’ devices or accounts.
Agents need to brush up on their email cybersecurity practices by:
- Enabling two-factor authentication on all accounts and devices
- Creating strong pass sentences/phrases as opposed to passwords
- Learning how to detect AI-generated content
- Spotting fraudulent emails
- Determining which links not to click or attachments not to download in emails
- Installing anti-virus software
- Regularly conducting security updates on all devices
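For agents (or their IT support) comfortable with a little scripting, the link-checking habits above can be sketched in code. The following is a minimal, illustrative Python sketch, not a security product; the trusted-domain list and the lookalike heuristics are assumptions you would adapt to your own verified vendors.

```python
import re
from urllib.parse import urlparse

# Assumption: the agent maintains this list of domains verified
# out-of-band (by phone or in person) with the title company and lender.
TRUSTED_DOMAINS = {"mytitlecompany.com", "mylender.com"}

def link_flags(display_text: str, href: str) -> list[str]:
    """Return reasons a link in an email looks suspicious (empty list = no flags)."""
    flags = []
    host = (urlparse(href).hostname or "").lower()

    # 1. The visible link text shows one domain, but the href points elsewhere.
    shown = re.search(r"[\w.-]+\.[a-z]{2,}", display_text.lower())
    if shown and shown.group() not in host:
        flags.append("link text does not match destination")

    # 2. The destination is not on the verified trusted list.
    if host and not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        flags.append(f"untrusted domain: {host}")

    # 3. A common lookalike trick: digits substituted for letters (e.g. 'tit1e').
    if re.search(r"\d", host.split(".")[0]):
        flags.append("digits in domain name (possible lookalike)")

    return flags

# Example: a lookalike domain masquerading as the title company
print(link_flags("mytitlecompany.com", "http://mytit1ecompany.com/wire"))
```

A script like this only automates the habit; the real protection is still the out-of-band phone call to a known number before any funds move.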
Michele Bellisari, an agent in Boca Raton, Florida, is an early adopter of ChatGPT. She uses it to create YouTube video descriptions, generate social media posts and blogging topics, property descriptions, and more.
She is also a cautious user, saying, “I always question the authenticity and verify the legitimacy of all correspondence. Knowing that criminals use these tools presents a threat to our and our clients’ security. However, we don’t have to be helpless victims. By staying vigilant, verifying identities, and adopting best practices for online safety, we can stay one step ahead.”
3. Spreading disinformation with ChatGPT
ChatGPT can be misused to spread disinformation by creating fake:
- Profiles and personas to engage in criminal and fraudulent activities
- Social media posts
- Listings for properties that don’t exist or are not for rent or sale
Cybercriminals can also use AI to pose as real estate agents, buyers or sellers to defraud the other party.
With their well-written content, criminals could hijack property listings or create fraudulent property listings similar to Craigslist crimes to entice buyers or renters to pay deposits or down payments for properties they will never receive.
Sellers are at risk when criminals use ChatGPT to create false documents, including IDs, approval letters and employment records, to pose as legitimate buyers.
Daniel Smith, founder of Keepingly, a technology company whose online home management system keeps a homeowner’s records in one digital location, is a new ChatGPT user who understands the cyber risks better than most.
“With prevalent misinformation and disinformation, users should check and double-check everything. ChatGPT content poses some risks to the relevancy of the information it provides to end users. People also need to know that its knowledge stopped in 2021.”
4. Deepfake dangers: Why you can’t believe your eyes, or your ears
Deepfakes are computer-generated media, including images, videos and audio recordings, that are manipulated to make them appear as if they are real. Although they have legitimate uses, they can also be used for malicious purposes, such as spreading false information or defaming individuals. In the real estate industry, deepfakes can pose a significant danger for real estate agents, sellers, and buyers.
Deepfakes allow criminals to put the face of whomever they are impersonating onto another body and make it say or do anything they want in realistic videos. For example, they can imitate agents, buyers, sellers or other real estate professionals to mislead victims or obtain financial information. They can also capture the voice of their target, impersonate them on phone calls, and request information from their intended victims.
Rental properties and those listed on short-term rental platforms, like Airbnb and Vrbo, are cyber targets, and AI can be used to victimize unsuspecting renters and guests. Vacationers and even visiting nurses are targeted by scam artists who create deepfake property images and advertise them to people who are often pressed for time and unable to check the unit out in advance. The scammers then pressure victims to wire money to lock in the too-good-to-be-true price.
A residential investor with properties listed on the Airbnb platform, Stacey Johnson-Cosby of Kansas City, Missouri, and her husband and partner, Dewayne Cosby, hired a management company to manage their out-of-town short-term rental units. She also relies on the security checks within the Airbnb system. In addition, she was impressed to learn that Airbnb uses AI through machine learning and predictive analysis to prevent fraud.
As an agent of 35 years, a national housing speaker, and someone who’s involved in her community, Johnson-Cosby says she appreciates anything that can help her work more efficiently.
“As an agent and investor, I’m always looking for ways to protect our clients, especially now that there are different and emerging real estate investment business models that we need to understand fully. I’m new to these AI tools, and I appreciate working more efficiently, but I also know that I need to protect myself and my clients from these new cyber fraud threats, too.”
5. Staying safe and protecting the consumer
Cybercriminals’ phishing, email and texting schemes will only escalate as AI applications such as ChatGPT, deepfakes and the many others that have recently flooded the market grow in popularity. Agents can therefore add another service component to the client relationship by giving consumers a heads-up about these new scams in an effort to lead with safety.
Agents are advised to be extra skeptical. Jodi Kaplan, a broker at Gulf Coast International Properties of Naples, Florida, is a new ChatGPT user. She often works with out-of-town clients in the seasonal home market and has always been cautious, but she understands she needs to double down on verification.
“I pull a title search or property deed. I need proof of the property. I also use the TitleNow app to verify ownership. Finally, I use Google Earth and look at the history of the property. If it sounds too good to be true, it usually is.”
Whether you are dealing with a deepfake or ChatGPT content, exercising caution and conducting due diligence is crucial. Fact-check all content for accuracy in case it is AI-generated, and before sending money or personal information, take the time to verify the authenticity of the individual or organization requesting it.
AI checking tools to use:
- Verify properties using Google Maps and the street view features.
- Check people and property images on Google Lens.
- Closely study images for telltale signs of AI generation: extra limbs, blurred edges, unblinking eyes, etc.
- Utilize AI detection tools, although these tools are new and not yet completely accurate
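One narrow check from the list above can even be automated: spotting when a “new” listing photo is a byte-for-byte copy of a photo already used elsewhere. This is a minimal Python sketch under that assumption; it only catches exact copies, so cropped or re-saved images would still need a reverse image search (like Google Lens) or a third-party perceptual-hashing library.

```python
import hashlib
from pathlib import Path

def file_digest(path: str) -> str:
    """SHA-256 of a file's bytes; identical digests mean identical files."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def is_reused_photo(candidate: str, known_listing_photos: list[str]) -> bool:
    """True if the candidate image is byte-for-byte identical to any known photo.

    Note: this only detects exact copies. Edited or re-compressed images
    require perceptual hashing or a manual reverse image search.
    """
    target = file_digest(candidate)
    return any(file_digest(p) == target for p in known_listing_photos)
```

Even this simple check can catch lazy scammers who lift photos directly from legitimate listings without altering them.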
Tracey Hawkins is a former real estate agent, international real estate safety and security expert, instructor, and keynote speaker. Connect with her on LinkedIn or Instagram.