Solanasis Blog Post Drafts — How Hackers Are Using AI

This file contains three blog post options on how hackers are using AI to make phishing, malware, exploits, and ransomware more dangerous.

Recommended default for Solanasis: Option A
It feels the most aligned with Dmitri’s later Solanasis voice: grounded, warm, direct, practical, and accessible.


Option A — Grounded, Founder-Led, Accessible

Hackers Are Using AI to Make Old Problems More Dangerous

We don’t need sci-fi to understand where this is going.

What we are seeing now is simpler, and in many ways more dangerous: attackers are using AI to make phishing, impersonation, malware work, and exploit research faster, cheaper, and more believable. Microsoft says AI can now help automate phishing campaigns, generate deepfakes, speed up vulnerability discovery, and support malware generation, which means the old attack paths are getting a serious upgrade.[1]

That matters because most organizations are not getting taken down by some mythical genius attack. They are getting hit because the basics are still too loose: weak passwords, weak MFA, old VPNs and firewalls, exposed remote access, untested backups, poor patching, and teams that are still far too trusting when a message looks polished enough.

Phishing Gets a New Engine

Let’s be frank: phishing is still one of the easiest doors in, and AI is helping attackers make that door look a lot more inviting.

A scammer no longer needs to be especially articulate, especially patient, or especially good at English to send a convincing fake invoice, fake login alert, fake DocuSign request, fake executive message, or fake help-desk email. AI helps them write better bait, tailor it to the victim, translate it cleanly, and spin up more variations faster. Microsoft specifically calls out AI-automated phishing and highly convincing fraudulent messages,[1] while CrowdStrike reported a 442% jump in voice phishing, or vishing, between the first and second half of 2024.[2]

That last part matters more than many people realize. We are not just talking about suspicious emails with broken grammar anymore. We are talking about urgent phone calls, polished callback scams, QR-code lures, and even AI-assisted impersonation that can make the request feel normal enough for a busy employee to just go along with it. APWG counted more than 1.13 million phishing attacks in Q2 2025, and said the total kept climbing over the prior year.[3]

Malware and Exploits Get Easier to Build Around

We should also stop pretending the danger starts and ends with phishing.

Attackers are using AI on the technical side too. It can help them sort through documentation, write or debug code, translate tools, scan open-source material, identify likely weaknesses, and move faster through the boring parts that used to take more skill or more time. Microsoft’s 2025 Digital Defense Report even includes malware generation, exploit development, reconnaissance, and domain impersonation in its map of how AI is augmenting cyberattacks.[1]

That does not mean every criminal suddenly became a world-class exploit developer overnight. It means more bad actors can punch above their weight, and the ones who already had skill can move faster.

How the Ransom Story Usually Unfolds

Most ransomware events do not begin with a dramatic encryption screen. They begin much earlier, usually with some very human moment where trust was misplaced or maintenance was delayed.

Someone clicks the wrong link. Someone types credentials into a fake portal. Someone approves a push notification they should not have approved. Someone leaves a remote service exposed. Someone delays a patch because the team is busy and nothing bad has happened yet.

Then the attacker gets a foothold, steals credentials, moves laterally, grabs data, and decides how to monetize the access. Sometimes that means encrypting systems. Sometimes it means threatening to leak data. Often it means both.

This is one reason ransomware keeps hurting organizations even when leaders think they are too small to be worth targeting. Verizon says ransomware shows up in 88% of SMB breaches,[4] and Sophos says the average recovery cost from ransomware in 2025 was about $1.5 million.[5]

So even when the ransom itself is not paid, the cost can still be brutal: downtime, cleanup, stress, lost trust, legal costs, delayed operations, and months of distraction.

The Real Penalty for Weak Basics

What AI changes most is the economics.

It lets attackers test more angles, produce more convincing lures, personalize scams faster, and accelerate parts of the technical workflow that used to create more friction. That means the penalty for weak fundamentals has gone up.

If your organization still has gaps around identity, patching, remote access, backups, or staff awareness, AI gives attackers more speed and more scale to work with. The issue is not that machines have become magical. The issue is that neglect has become more expensive.

Pro Tips We Recommend Clients Implement Now

Here are practical moves that dramatically reduce risk without requiring an enterprise-sized budget:

1. Put strong MFA on the accounts that matter most first

Start with email, Microsoft 365, Google Workspace, VPN, firewall admin, remote desktop tools, cloud admin accounts, finance logins, and payroll systems.

2. Create a verification rule for money and sensitive changes

Any request involving a wire, ACH change, payroll update, password reset, MFA reset, or gift card purchase should require a second channel of verification. Not the same email thread. Not the same chat thread. A separate phone call or other trusted method.
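
To make this concrete, here is a minimal Python sketch of what a second-channel rule can look like inside an internal approval tool. All of the request types, channel names, and function names below are illustrative assumptions, not taken from any particular product:

```python
# Hypothetical sketch: enforce second-channel verification for sensitive requests.
# Request types and channel names are examples, not from any specific product.

SENSITIVE_TYPES = {
    "wire_transfer", "ach_change", "payroll_update",
    "password_reset", "mfa_reset", "gift_card_purchase",
}

def needs_second_channel(request_type: str) -> bool:
    """A sensitive request always requires out-of-band verification."""
    return request_type in SENSITIVE_TYPES

def can_approve(request_type: str, channels_verified: set[str]) -> bool:
    """Approve only when a second, different channel confirmed the request.
    The same email or chat thread never counts as verification."""
    if not needs_second_channel(request_type):
        return True
    out_of_band = channels_verified - {"email", "chat"}
    return len(out_of_band) >= 1

# A wire change confirmed only in the original email thread is rejected;
# the same change confirmed by a separate phone call is allowed.
print(can_approve("wire_transfer", {"email"}))            # False
print(can_approve("wire_transfer", {"email", "phone"}))   # True
```

The point of the sketch is the shape of the rule, not the code: the sensitive list is explicit, and "verified" is only true when a channel outside the original thread confirms it.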

3. Tighten your patching rhythm for internet-facing systems

Browsers, firewalls, VPN appliances, remote access tools, WordPress plugins, and public-facing apps need a much shorter patch window than most teams currently give them.
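
As a rough illustration of what a "patching rhythm" can mean in practice, the sketch below flags assets whose days-since-patch exceed a policy window. The seven-day window, asset names, and dates are invented for the example; the right window depends on your environment:

```python
# Hypothetical sketch: flag systems whose last patch date exceeds a policy
# window. Windows, asset names, and dates are made up for illustration.
from datetime import date

PATCH_WINDOW_DAYS = {"internet_facing": 7, "internal": 30}  # example policy

def overdue(assets, today):
    """Return asset names whose days since last patch exceed their window."""
    late = []
    for name, exposure, last_patched in assets:
        age = (today - last_patched).days
        if age > PATCH_WINDOW_DAYS[exposure]:
            late.append(name)
    return late

assets = [
    ("vpn-gateway", "internet_facing", date(2025, 6, 1)),   # 14 days stale
    ("file-server", "internal",        date(2025, 6, 10)),  # 5 days stale
]
print(overdue(assets, date(2025, 6, 15)))  # ['vpn-gateway']
```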

4. Get rid of shared accounts and stale access

Shared logins make accountability muddy, and old contractor accounts become gifts to attackers. Clean these up now.
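
A stale-access review can start as something as simple as the following Python sketch, which flags shared accounts and accounts idle for more than 90 days. The account data and threshold here are synthetic examples; a real review would pull last-login data from your identity provider:

```python
# Hypothetical sketch: flag shared accounts and accounts idle past a
# threshold. All account data here is synthetic.
from datetime import date

STALE_AFTER_DAYS = 90  # example threshold

def flag_accounts(accounts, today):
    """Return names of accounts that are stale or shared and need review."""
    flagged = []
    for acct in accounts:
        idle = (today - acct["last_login"]).days
        if acct["shared"] or idle > STALE_AFTER_DAYS:
            flagged.append(acct["name"])
    return sorted(flagged)

accounts = [
    {"name": "contractor-old", "last_login": date(2024, 11, 1), "shared": False},
    {"name": "frontdesk",      "last_login": date(2025, 6, 10), "shared": True},
    {"name": "dmitri",         "last_login": date(2025, 6, 12), "shared": False},
]
print(flag_accounts(accounts, date(2025, 6, 15)))  # ['contractor-old', 'frontdesk']
```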

5. Train your team on modern phishing, not just old phishing

Show people examples of callback phishing, QR-code phishing, fake help desk requests, AI-polished executive impersonation, and fake vendor payment updates.

6. Test backup restores, not just backups

A backup that has never been restored under pressure is not a strategy. It is a hope.
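
For teams that want to turn that into a habit, here is a small Python sketch of a restore test: it archives a file, restores it into a fresh directory, and verifies the bytes match. The local tar archive is purely for illustration; a real test would restore from your actual backup system onto scratch hardware:

```python
# Hypothetical sketch: a restore test that actually restores and verifies,
# using a local tar archive as a stand-in for a real backup system.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file's contents so restored bytes can be compared to the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_test(source: Path) -> bool:
    """Back up `source`, restore it to a fresh directory, verify bytes match."""
    with tempfile.TemporaryDirectory() as tmp:
        archive = Path(tmp) / "backup.tar"
        with tarfile.open(archive, "w") as tar:
            tar.add(source, arcname=source.name)
        restore_dir = Path(tmp) / "restore"
        with tarfile.open(archive) as tar:
            tar.extractall(restore_dir)
        return sha256(restore_dir / source.name) == sha256(source)

with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as f:
    f.write(b"critical records")
print(restore_test(Path(f.name)))  # True
```

Notice that the test only passes when the restored copy is byte-identical to the original; "the backup job ran" is not the thing being tested.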

7. Lock down remote access

If remote desktop, VPN, admin panels, or remote support tools are exposed too broadly, shrink the attack surface. Restrict by role, by IP where possible, and with MFA.
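
Restricting by IP can be as simple as an allowlist check like the Python sketch below. The CIDR ranges shown are documentation examples, not recommendations; substitute your real office or VPN egress ranges:

```python
# Hypothetical sketch: permit admin-panel logins only from approved networks.
# The CIDR ranges are documentation-example ranges, not real recommendations.
import ipaddress

ADMIN_ALLOWLIST = [
    ipaddress.ip_network("203.0.113.0/24"),   # example: office egress range
    ipaddress.ip_network("198.51.100.0/24"),  # example: VPN concentrator pool
]

def admin_access_allowed(source_ip: str) -> bool:
    """Return True only if the source address sits inside an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ADMIN_ALLOWLIST)

print(admin_access_allowed("203.0.113.45"))  # True
print(admin_access_allowed("192.0.2.10"))    # False
```

In practice this logic usually lives in the firewall or the application gateway rather than application code, but the principle is the same: deny by default, allow narrow ranges, and require MFA on top.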

8. Use a password manager and unique passwords everywhere

Credential reuse is still one of the oldest and ugliest gifts we keep handing attackers.

9. Establish a “Pause Protocol”

Give staff explicit permission to slow down when a request feels urgent, strange, secretive, or unusually flattering. That one cultural shift stops more damage than many teams realize.

10. Treat AI like a new hire, not an infallible expert

Microsoft explicitly recommends human oversight and review around AI outputs.[1] That applies internally too. If your staff are relying on AI to triage messages, summarize tickets, or assist with operations, review the workflows so your own tools do not become part of the problem.

Where We Go From Here

What if the real takeaway is not that AI changes everything overnight, but that it makes weak security hygiene much more expensive to ignore?

That is where we are now.

The good news is that we do not need perfect security to become meaningfully more resilient. We do need to take the basics seriously, practice them consistently, and stop treating cybersecurity like a side project we will get to after the next fire drill.

That is exactly the kind of work we step into with our clients at Solanasis: helping organizations make the fundamentals solid, so one believable phishing email or one neglected system does not spiral into a full-blown operational crisis.

We are immensely grateful to be of service at a time like this.


Option B — More Entertaining, Story-Driven, Regular-People Friendly

AI Is Giving Hackers a Better Costume

Most cyberattacks do not start with some hooded genius typing furiously in a dark basement.

They start with something that looks normal.

An email that seems routine. A voicemail that sounds urgent. A login page that looks close enough. A message that appears to come from your boss, your vendor, your bank, or your IT person on a day when everyone is already moving too fast.

That is why AI matters here.

Not because it has turned every attacker into a wizard, but because it has given them a far better costume.

The New Mask

For years, one of the easiest ways to spot a scam was that it sounded off. The grammar was weird. The phrasing was clunky. The urgency felt sloppy. The fake website looked a little crooked.

AI has changed that.

Now a scammer can generate cleaner emails, more believable scripts, better translations, fake executive messages, fake invoices, fake support replies, and even voice-based social engineering with far less effort. Microsoft says AI can automate phishing campaigns, generate deepfakes, and help with vulnerability discovery and malware generation,[1] while CrowdStrike reported a 442% surge in voice phishing between the first and second half of 2024.[2]

In plain English, the scams are getting smoother.

Why Phishing Still Works So Well

Phishing works because it hijacks something very human: trust mixed with urgency.

Attackers know that people are busy, overloaded, distracted, polite, and often trying to be helpful. So the lure does not need to be perfect. It just needs to feel believable enough for long enough.

And now AI helps them produce that believable-enough feeling at scale.

APWG observed more than 1.13 million phishing attacks in Q2 2025, and also noted rising business email compromise activity, including a sharp increase in the average amount requested in wire-transfer scams.[3]

So yes, phishing is still alive, and yes, it is evolving.

Then Comes the Real Damage

Once someone clicks, logs in, approves the wrong thing, or lets the wrong person in, the attackers usually do not rush. They look around. They gather credentials. They learn the environment. They find backups, admin tools, security gaps, old accounts, and exposed systems.

Then comes the decision: steal the data quietly, extort the victim, encrypt systems, or combine all three.

This is why ransomware is so painful. The bill is rarely just the ransom. Verizon says ransomware is present in 88% of SMB breaches,[4] and Sophos puts average recovery cost in 2025 at about $1.5 million.[5]

So by the time the ransom note appears, the expensive part has often already begun.

What AI Really Changes

AI changes speed, polish, and scale.

It helps attackers write faster, research faster, scan faster, impersonate faster, and test more approaches with less effort. It lowers the barrier for some bad actors, and it gives already-skilled attackers more momentum.

That is why weak security basics are more dangerous now than they were even a few years ago.

Pro Tips That Make a Real Difference

Here is the encouraging part: there are practical moves regular organizations can implement right away.

  • Protect email first. If your email gets compromised, everything else gets easier for the attacker.
  • Require MFA. Especially for email, finance, admin accounts, cloud apps, and remote access.
  • Verify money requests out of band. A separate call can save enormous pain.
  • Shorten patch timing. Old internet-facing systems are low-hanging fruit.
  • Show staff what modern phishing looks like. Not just fake package emails, but callback scams, fake tech support, QR-code phishing, and executive impersonation.
  • Use a password manager. Unique passwords still matter more than many people want to admit.
  • Test restores. Backups are only real when recovery works.
  • Remove stale accounts. Old users and forgotten vendors create silent risk.
  • Give people permission to question urgency. A healthy pause is a security control.
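
On the password-reuse point, even a tiny audit script makes the problem visible. The sketch below groups accounts by identical password to show reuse; the credentials are synthetic, and a real audit would compare hashes from a breach-checking tool rather than handling plaintext:

```python
# Hypothetical sketch: surface password reuse by grouping accounts that share
# a password. Credentials are synthetic; a real audit would compare hashes.
from collections import defaultdict

def reused_passwords(credentials):
    """Map each password used more than once to the accounts sharing it."""
    by_password = defaultdict(list)
    for account, password in credentials:
        by_password[password].append(account)
    return {pw: accts for pw, accts in by_password.items() if len(accts) > 1}

creds = [
    ("email",   "Summer2024!"),
    ("vpn",     "Summer2024!"),
    ("payroll", "x9#Lq7!rT2"),
]
print(reused_passwords(creds))  # {'Summer2024!': ['email', 'vpn']}
```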

The Bigger Invitation

What if resilience is not mainly about buying shinier tools, but about finally getting serious about the basics we already know matter?

That is the invitation in front of us.

We do not need panic. We do not need techno-drama. We do need stronger habits, clearer guardrails, and a culture where people know how to slow down before a believable lie turns into an operational mess.

That is the kind of resilience we help cultivate at Solanasis.

Thank you, Life, for letting us step into this work.


Option C — Sharper, More Urgent, Still Accessible

The Cost of Weak Security Just Went Up

If your organization is still loose with passwords, patching, remote access, backups, and staff training, this is not the time to assume you will skate by.

Hackers are using AI to improve phishing, impersonation, malware work, exploit research, and social engineering. Microsoft says AI can automate phishing campaigns, generate deepfakes, and support vulnerability discovery and malware generation.[1] CrowdStrike says AI-driven phishing and impersonation helped fuel a 442% increase in voice phishing between the first and second half of 2024.[2]

So no, the threat is not theoretical.

It is already here, and it is hitting the same organizations that still have the same avoidable gaps.

Phishing Is Still the Front Door

Most attacks do not begin with brilliance. They begin with access.

A fake login page. A fake invoice. A fake Microsoft alert. A fake tech support call. A fake executive request. A QR code that leads to a credential-harvesting page. APWG reported more than 1.13 million phishing attacks in Q2 2025 and said phishing activity had been rising steadily over the prior year.[3]

If your team still thinks phishing means an obviously fake email from a cartoonish scammer, they are behind.

AI Makes Attackers More Efficient

AI is a force multiplier.

It helps attackers write better lures, translate them cleanly, spin up more variants, summarize stolen information, debug code, and move faster through reconnaissance and exploit development. Microsoft’s 2025 Digital Defense Report explicitly maps AI to automated spearphishing, reconnaissance, code generation and debugging, exploit development, malware generation, and domain impersonation.[1]

That means more pressure on organizations that are already under-defended.

Ransomware Is Still a Business Model

The reason weak basics matter so much is that attackers know how to monetize access once they get in.

They steal credentials. They escalate privileges. They move laterally. They steal data. Then they extort, encrypt, or both.

Verizon says ransomware appears in 88% of SMB breaches.[4] Sophos puts the average recovery cost in 2025 at about $1.5 million.[5] Chainalysis tracked roughly $820 million in on-chain ransomware payments in 2025, even as aggregate payments fell and claimed attacks rose 50%.[6]

In other words, the machine is still very alive.

What Clients Can Implement Right Now

If we want fewer “How did this happen?” moments, these are the moves:

  1. Turn on MFA everywhere that matters. Email, admin accounts, cloud platforms, payroll, finance, VPN, firewalls, and remote support tools.
  2. Require second-channel verification for money or identity changes. Wire changes, MFA resets, payroll changes, password resets, and urgent purchases should never rely on a single message.
  3. Patch internet-facing systems faster. Firewalls, VPNs, browsers, public apps, WordPress plugins, and remote tools should not sit stale.
  4. Train on current scams. Voice phishing, callback phishing, QR-code phishing, help-desk impersonation, and AI-polished business email compromise are here now.
  5. Use unique passwords and a password manager. Stop letting one compromised password become ten compromises.
  6. Review who still has access. Old staff, old contractors, old vendors, and forgotten service accounts all matter.
  7. Practice recovery. Do not just back up. Restore, test, and document.
  8. Reduce exposed remote access. Tighten what is public, what is admin-only, and who can get in from where.
  9. Create a culture where people can pause. Urgency is one of the attacker’s favorite tools.
  10. Audit how your own AI tools are used. If staff are feeding sensitive data into tools or trusting AI outputs too blindly, that creates another layer of risk.[1]
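
The "pause" idea in point 9 can even be prototyped as a crude heuristic for training exercises. The keyword lists below are illustrative assumptions only; this is a teaching aid for spotting urgency-plus-sensitivity, not a phishing detector:

```python
# Hypothetical sketch: a crude "pause protocol" heuristic that flags messages
# combining urgency with a sensitive request, prompting out-of-band
# verification. Keyword lists are illustrative; not a real detector.
URGENCY = {"urgent", "immediately", "right now", "before end of day"}
SENSITIVE = {"wire", "gift card", "password", "mfa", "payroll", "invoice"}

def should_pause(message: str) -> bool:
    """Flag messages that pair urgent language with a sensitive request."""
    text = message.lower()
    return any(u in text for u in URGENCY) and any(s in text for s in SENSITIVE)

print(should_pause("Need this wire sent immediately, keep it quiet."))  # True
print(should_pause("Lunch on Friday?"))                                 # False
```

The real control is cultural, not technical: staff need explicit permission to slow down when a message trips those two wires at once.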

Bottom Line

AI did not invent ransomware. It did not invent phishing. It did not invent exploit development.

What it did was make all of those easier to scale, easier to polish, and easier to deploy against organizations that are still underestimating the value of the basics.

That is why the penalty for weak security has gone up.

At Solanasis, we help organizations make the fundamentals solid before a preventable issue turns into a full-blown operational crisis. That work matters more now than ever.

We really appreciate the organizations that are willing to step into this work before the pain makes the decision for them.


Suggested Titles Across All Versions

  • Hackers Are Using AI to Make Old Problems More Dangerous
  • AI Is Giving Hackers a Better Costume
  • The Cost of Weak Security Just Went Up
  • Why AI Makes Phishing and Ransomware More Dangerous for Everyone Else
  • Hackers Don’t Need Magic. They Just Need Your Basics to Be Weak

Suggested Subheads

  • AI is helping attackers make phishing, malware, and exploits faster, cheaper, and more believable — which means the fundamentals matter more than ever.
  • The danger is not futuristic; it is practical, scalable, and already hitting organizations with weak security basics.
  • The old attack paths are still here. AI is just helping bad actors move through them faster.

Editorial Notes

  • For a website blog post, Option A is likely the best fit.
  • For LinkedIn or email newsletter adaptation, Option B is easiest to trim.
  • For a sharper thought-leadership or founder-opinion piece, Option C has the strongest edge.
  • Before final publishing, it is wise to re-verify any time-sensitive stats.

Footnotes

  1. Microsoft, Microsoft Digital Defense Report 2025 and related 2025 summary materials. Key points used here include AI-automated phishing, deepfakes, vulnerability discovery, malware generation, code generation/debugging, exploit development, and the recommendation to pair AI with human oversight. Sources: Microsoft main report page and report PDF.

  2. CrowdStrike, 2025 Global Threat Report. Reported a 442% increase in voice phishing (vishing) between H1 and H2 2024 and highlighted GenAI-powered social engineering.

  3. APWG (Anti-Phishing Working Group), Phishing Activity Trends Report, Q2 2025. Reported 1,130,393 phishing attacks in Q2 2025, a rise from Q1, along with growth in BEC and QR-code-related phishing activity.

  4. Verizon, 2025 Data Breach Investigations Report SMB Snapshot. Reported ransomware as a component of 88% of SMB breaches.

  5. Sophos, State of Ransomware 2025. Reported average recovery cost of about $1.5 million.

  6. Chainalysis, 2026 Crypto Crime Report / ransomware analysis. Reported approximately $820 million in on-chain ransomware payments in 2025, while claimed attacks rose.