Thursday, October 21, 2021

Transparent Snapshots - Frequent, Easy, Low cost VMware backups!

Transparent Snapshots is a breakthrough technology for VMware backup and recovery.

In short, Transparent Snapshots:

  • Decreases backup costs
    Eliminates any additional servers, components, networking, etc.
  • Allows backing up Virtual Machines as often as every 15 minutes
    Huge, fast backups
  • In case of ransomware, we can restore to a very recent point in time
  • Backup is super easy
    We can be set up in 3 minutes
  • Transparent Snapshots can be implemented for tens or for thousands of Virtual Machines in the same way
  • Transparent Snapshots is a technology fully supported by VMware
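To put the 15-minute frequency in perspective, here is a quick back-of-the-envelope comparison of the worst-case data-loss window (RPO) for nightly backups versus a 15-minute cadence. The helper function is invented for this sketch and the numbers are illustrative:

```python
# Worst-case data-loss window (RPO) for a given backup frequency:
# if backups run every N minutes, an attack just before the next
# backup can lose at most one full interval of data.

def worst_case_loss_minutes(interval_minutes):
    return interval_minutes

daily = worst_case_loss_minutes(24 * 60)   # classic nightly backup
frequent = worst_case_loss_minutes(15)     # 15-minute cadence

print(f"nightly: up to {daily} minutes of data lost")      # 1440
print(f"15-min:  up to {frequent} minutes of data lost")   # 15
print(f"loss window is {daily // frequent}x smaller")      # 96x
```

In other words, moving from one backup per day to one every 15 minutes shrinks the worst-case loss window by a factor of 96.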
Please find below the video describing the Transparent Snapshots technology, with licensing and a demo.
Direct link to the video:

0:00 What do we require from VMware backups?
1:04 Agenda
1:24 VADP issues and history
4:08 RP4VM bypasses the VADP problems
4:40 What does Transparent Snapshots give us? Always available, simple, no proxy, very fast, no VMware snapshots, we do not load or pause VMs
6:14 Architecture of Transparent Snapshots
10:30 Value of Transparent Snapshots
13:35 PPDM - the backup solution that implements Transparent Snapshots
14:50 PPDM - self-backup architecture
17:10 Why Transparent Snapshots? No proxy, backup directly from ESX - cost savings, no impact on production
20:00 Why PPDM?
21:44 Transparent Snapshots requirements
22:04 Comparison between VADP and Transparent Snapshots
22:50 Licensing
26:00 Demo: policy, self backup, Transparent Snapshots, HUGE FAST backups - 23 seconds - the real time of backup!
31:50 Summary / value of Transparent Snapshots

Fast, easy and always successful backups and restores!
Daniel Olkowski

Monday, September 6, 2021

Cyber bunker - architecture, network, recovery - remote discussion, 30 minutes, Friday, September 10, 12:00 Vienna time

Register for the discussion!

A cyber bunker provides recovery after a hacker/ransomware attack, allows verification of production, and is fully automated.
That is why it is gaining more and more popularity.

   * What does a cyber bunker look like?
   * How is it connected with the rest of the world?
   * What happens in the bunker when the attack starts?
And the most important: What is the cost?

Let me invite you on a 30-minute journey inside the vault:
    Cyber bunker - architecture, network, recovery
    Friday, 10th of September, 12:00pm CEST (Vienna time) – 30 minutes
Internet invitation:

We will see the cyber bunker's construction, networking and TCO.
And... we will simulate an attack!

Before our meeting, have a look at the video about Cyber Recovery:
And article:

Prepare coffee, questions and ...
See you in the bunker!
Daniel Olkowski

Friday, September 11, 2020

How to protect virtualization platforms? - Red Hat, oVirt, KVM, Xen, Oracle, Nutanix, Proxmox

Direct links to the above discussion about protecting virtualization platforms like Red Hat, oVirt, KVM, Xen, Oracle, Nutanix and Proxmox:

Here are the links to particular topics from the conversation:

What is vProtect?

vProtect architecture
1-phase backup to Data Domain with source de-dup
Server and node

Installation of vProtect

vProtect demo

Backup of images
Snapshots kept on the virtualization platform
We can restore just the state of a VM
Self backup infrastructure

Recovery plan
One button recovery
Scheduled recoveries

Backup consistency

Data Domain integration
Source de-duplication

Huge speed customer example

vProtect and source de-dup in the same license bundle

Summary of functionalities
Full backups
Incremental backups with CBT
Restoring single file
Mounting disks from backups to the virtualization platform
Disk exclusions
Self backup environment
Pre and post commands
File level restore - question
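The CBT-based incrementals from the list above can be sketched conceptually. The helper names below are invented for illustration and are not the vProtect API; the point is only that the hypervisor's changed-block list lets an incremental copy a few blocks instead of rereading the whole disk:

```python
# Changed Block Tracking (CBT) in a nutshell: the hypervisor records
# which blocks changed since the last backup, so an incremental
# copies only those blocks instead of rereading the whole disk.

def full_backup(disk):
    return list(disk)

def incremental_backup(disk, changed_blocks):
    """Copy only the blocks that CBT marked as changed."""
    return {i: disk[i] for i in changed_blocks}

def restore(full, incrementals):
    """Start from the full backup, then replay incrementals in order."""
    disk = list(full)
    for inc in incrementals:
        for i, block in inc.items():
            disk[i] = block
    return disk

# Hypothetical usage: a 3-block disk where block 1 changes after the full.
full = full_backup([b"a", b"b", b"c"])
inc = incremental_backup([b"a", b"B", b"c"], {1})
```

Restoring the full backup and replaying the incremental reproduces the current disk state.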

Backup on demand
Starting backups
Creating automated policy
Creating recovery plan

Great CLI interface


How is vProtect licensed?

Protect everything!

Thursday, August 27, 2020

Data Domain 7.2 - What is new?

Performance & Security - these are flagship Data Domain features.
They are further improved in new Data Domain 7.2.
Let me invite you to a short discussion about the new DD 7.2 features.

Direct link to the above discussion about new DD7.2 features:

What is new in Data Domain 7.2?
Compliance - absolute lock for protected data

Data Domain allows you to absolutely block backups for a specified period of time. Thanks to this, neither ransomware nor a hacker can hurt our data - we can always restore from Data Domain:

Data Domain blocks data removal / change for a defined period of time (month / quarter / …). A hacker cannot change the clock to bypass the lock - neither directly nor through the NTP server:

Version 7.2 further extends the flexibility of compliance. We can define the maximum time change applied to Data Domain and the maximum number of changes:
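The locking logic described above can be sketched conceptually in a few lines. This is an illustration of the idea only - the class name, limits and API below are invented for this sketch, not the actual DD OS implementation:

```python
import datetime as dt

class RetentionLockSketch:
    """Conceptual sketch of a compliance-style retention lock:
    locked files cannot be deleted before expiry, and system clock
    adjustments are bounded so locks cannot be fast-forwarded away.
    Illustration only - not the real DD OS implementation."""

    MAX_CLOCK_SKEW = dt.timedelta(hours=2)   # illustrative limit
    MAX_CLOCK_CHANGES = 5                    # illustrative limit

    def __init__(self, now):
        self.now = now
        self.clock_changes = 0
        self.locks = {}                      # filename -> expiry time

    def lock(self, name, retain):
        self.locks[name] = self.now + retain

    def delete(self, name):
        """Deletion succeeds only after the retention period expires."""
        if name in self.locks and self.now < self.locks[name]:
            return False                     # still locked - refuse
        self.locks.pop(name, None)
        return True

    def set_clock(self, new_time):
        """Reject large jumps and too many changes, so an attacker
        cannot advance the clock until the locks expire."""
        if abs(new_time - self.now) > self.MAX_CLOCK_SKEW:
            return False
        if self.clock_changes >= self.MAX_CLOCK_CHANGES:
            return False
        self.clock_changes += 1
        self.now = new_time
        return True
```

For example, with a 30-day lock an immediate delete is refused, and a clock jump of 31 days is rejected as exceeding the skew limit - which is exactly why neither a direct clock change nor a poisoned NTP server helps the attacker.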

The compliance function is available in Data Domain hardware appliances:

Further extension of Compliance (no removable lock) is Cyber Bunker:
described in the article:

How much more space do we gain with new Data Domain models?

New Data Domain models compress new blocks more effectively thanks to a dedicated compression card:

And we also gain more performance:

How much more do we gain thanks to the compression card? The video shows real numbers from production Data Domains worldwide:

Let's assume that an old Data Domain would need 100TB for data storage.
The new models (DD6900 / DD9400 / DD9900) will need
  • only 77TB for storing backups securing your file systems
  • only 85TB for storing backups securing databases
A detailed explanation of the mechanism that makes the new Data Domains use less space for backups:
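The quoted ratios translate into simple arithmetic. A small sketch - the `space_needed` helper is invented here purely for illustration:

```python
# Back-of-the-envelope savings from the quoted ratios:
# a workload needing 100TB on an older model needs less on the
# DD6900 / DD9400 / DD9900 thanks to the compression card.

def space_needed(baseline_tb, ratio):
    """Physical space on a new model, given the quoted reduction ratio."""
    return baseline_tb * ratio

filesystem_tb = space_needed(100, 0.77)   # file-system backups: 77TB
database_tb = space_needed(100, 0.85)     # database backups: 85TB

print(f"file systems: {filesystem_tb:.0f}TB (23% less space)")
print(f"databases:    {database_tb:.0f}TB (15% less space)")
```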

BoostFS = huge fast backup speed thanks to source de-duplication
For everyone!

BoostFS Live:

Data Domain 7.2 provides even faster BoostFS backups:

DD 7.2 increases the already superb performance of BoostFS - real data from SQL backups:
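The source de-duplication idea behind Boost can be sketched conceptually. This is an illustration of the principle only - `source_dedup_send` is an invented helper, not the actual Boost protocol:

```python
import hashlib

def source_dedup_send(chunks, server_index):
    """Client-side de-duplication: hash each chunk and ship the bytes
    only when the server has not seen that fingerprint before.
    Duplicate chunks cross the wire as a hash, not as data.
    Returns the number of payload bytes actually sent."""
    sent = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_index:
            server_index[digest] = chunk   # new data: ship the bytes
            sent += len(chunk)
        # known digest: only the fingerprint crosses the wire
    return sent
```

Because most blocks of a repeated backup are already known to the appliance, almost nothing travels over the network - which is where the "huge fast" backup speed comes from.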

Recovery performance

Each recovery stream is split into multiple threads to speed up recovery!
As a result, even if we have a Data Domain with a small number of disks, we achieve high speed:
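The idea of splitting one recovery stream into parallel readers can be sketched like this. It is a conceptual illustration only - `restore_stream` and its parameters are invented for this sketch, not Data Domain code:

```python
from concurrent.futures import ThreadPoolExecutor

def restore_stream(read_chunk, chunk_ids, threads=8):
    """One logical recovery stream fans out into several reader
    threads, so throughput is not limited by the latency of a
    single sequential reader; map() returns results in order,
    so the restored stream is reassembled correctly."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return b"".join(pool.map(read_chunk, chunk_ids))

# Hypothetical usage: read_chunk would fetch one chunk by id.
data = restore_stream(lambda i: bytes([i]), range(16), threads=4)
```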

Further optimization of Garbage Collection algorithms.

Data Domain provides huge performance - regardless of whether it is running internal processes or not.
Thanks to this, it has over 50% market share:

How to set Data Domain cleanup parameters - live:

Data Domain - an algorithmic device:
focused on performance:
Possibility to extend Compliance forever

Data Domain allows for fast, frequent backups

Source de-dup is invisible to production:

The video is a recording of a customer/partner discussion.
Thank you very much to all who took part in this event live!
Thank you for your time, discussion and... fun!!!

Performance and Security!

#datadomain #dd #7.2 #dd7.2 #backup #backupperformance #recovery #recoveryperformance #dataprotection #security #datasecurity #news #backupmedia #media #boostfs #boost #deduplication #de-dup

Wednesday, January 8, 2020

Ransomware attack - how can we recover?

Whatever we do in our life, it is always good to have Plan B.

What if my data are encrypted by ransomware?
Do I have Plan B?
How can I access my data?

What if I have backups that have no chance of being ransomwared / removed?
What if I have a cyber bunker?
What if I have Cyber Recovery plan?
And... can it be with very attractive cost?

What can a backup/recovery solution offer me in case of a ransomware/hacker attack?
Why is Plan B important?

How can I lock against ransomware/hackers?
How can I be sure that no one can remove / change my backups?
Can I restore in case of a ransomware/hacker attack?

Bunker in IT? - Why, When, How?
Can we do everything automatically?
Can we keep control of our IT despite ransomware?
Can we recover immediately?

All methods to protect against ransomware/hacker

Hardening - let's remember about this simple approach

Data Domain Snapshots
A no-cost, no-extra-space, no-performance-degradation method to protect our backups

Why does source de-duplication increase my security?

We can protect ourselves against ransomware/hackers
We can have Plan B
We can make it easy and automated...

Let's consider it...

Presentation about Cyber Recovery (the one used in video):

Only successful recoveries!

Monday, December 9, 2019

Why new Data Domain models? - DD6900 / DD9400 / DD9900 backup/recovery appliances

In September 2019 Dell announced 3 new Data Domain models (DD6900 / DD9400 / DD9900).
Should we take the new Data Domain models under consideration?

If so why?
What is the architecture?

Below is the summary of changes in new Data Domain models:
  • DD6900
  • DD9400
  • DD9900

Automated protection against ransomware / hackers.
Any backup performed on the new Data Domain models is automatically protected for a defined period.
Neither ransomware nor a hacker can delete it.
How is it possible?

Reducing the disk space required for backups
The new models require 10-30% less space for storing backups compared to the previous ones.
Wow! Can it be?

Starting even 64 Virtual Machines directly from the new Data Domains with 60 000 IOPS performance!
Huh, can backup be faster than production?

Faster restores from new Data Domain

Capacity on demand
With the new Data Domain models we can get more space but pay only for what we use.

Faster internal components
Does it matter?

Scalability from 48 TB to 1 250 TB

100Gb Ethernet card is possible!

Very little space required. Can you store 1.25PB in a single rack?

Online monitoring and visualization of all its components

How can we move to the new models?

The new Data Domain models have a number of new & exciting features.
It makes sense to have a closer look at them...

Presentation about new Data Domain models (the one used in video):

Only immediate recoveries!

Wednesday, August 21, 2019

How to keep long term backups? - Tiering or... something simple!

Recently, a customer asked me a question:
      I’m looking for a discussion document which outlines best practice for tiering backups.
This question has come up for ages.

The idea of tiering in backup comes from our internal dream:
Let’s have fast, secure and good media (storage) for recent backups/data
but at the same time
let’s keep old backups as cheap as possible.

Recent backups we usually keep on Data Domain.
Data Domain gives us huge performance, security and restore speed.
This is what we require from protecting production: no load, fast backups and easy, fast restores.

For older backup copies (archive backups) we often do not really know what to do.
We just need to keep old backups cheap...

We have a couple of options for storing old backups:
Object storage (local/cloud)
Or maybe…

Why not just keep 5 years of backups on Data Domain?
Local Data Domain / Data Domain in cloud – as one wishes.

During one of the workshops I asked a customer:
       How long do you keep backups on Data Domain, how long on tape?
He said:
       Daniel, I have no time and money to play with backup tiering.
       I have bought 2 Data Domains (a little bigger) and I just replicate between 2 cities.
       I have 90 copies on each Data Domain (30 daily for the last month and 60 monthly for the last 5 years).
       No touch, same cost, dream life.

My experience (I would choose it for myself) is that a slightly bigger DD - for both production backups and archive backups - is in many cases better than tiering.
The money is similar; the simplicity is incomparable.
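A rough sizing sketch of that approach - 90 copies (30 daily plus 60 monthly) on one deduplicating appliance. The 50TB full, 2% change rate and 10x first-copy reduction are assumptions picked for illustration, not guaranteed ratios:

```python
# Rough sizing of the "just keep everything on Data Domain" approach:
# 90 copies (30 daily + 60 monthly) of one workload, deduplicated.
# ASSUMPTIONS for illustration only: 50TB full backup, 2% change
# between copies, 10x reduction on the first copy.

full_backup_tb = 50
change_rate = 0.02
first_copy_reduction = 10
copies = 30 + 60

logical_tb = copies * full_backup_tb          # raw cost of 90 copies
# First copy is stored once (deduplicated/compressed); every later
# copy adds only its changed blocks.
physical_tb = (full_backup_tb / first_copy_reduction
               + (copies - 1) * full_backup_tb * change_rate)

print(f"logical: {logical_tb}TB, physical: ~{physical_tb:.0f}TB")
```

Under these assumptions, 4 500TB of logical retention lands at roughly 94TB of physical space - which is why "a slightly bigger DD" can absorb years of retention that would otherwise be tiered off.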

Of course, every case is different, but…
The simplest solutions are the best ones!

And let me quote my great friend from the UK:

When I started dealing with backup, I was told that the more we retain on Data Domain, the better the de-duplication is.
And that is so true…

The current push for tiering backups in reality creates more problems than it solves.
Tiering backup is:
  • Expensive
  • Complicated when it comes to restores
  • And brings many other compatibility issues
Why does keeping recent backups on great media (like Data Domain) and moving older backups to tape/object bring complication, less flexibility and additional costs?

With tiering in backup, we have 2 different media - plus the software and policies needed to manage those 2 different backup media.

Why do we accept all that complication?
To make backup cheaper.

But… even the hard costs of tiering, like additional storage (the second tier) plus potential licenses for tiering, in many cases eat up the whole price difference between the 2 media.

Not to mention soft costs like management, integration, know-how, …
This is exactly what my customer said from his experience.

So, having fantastic media like Data Domain, we can just increase its space a little - not much, thanks to de-duplication - and enjoy e.g. 5 years of retention!
With no problems, no management, easy.
And with similar or even less money…

And... Data Domain can run in the cloud, using Object Storage as the space for backups.
But this is something for another article...

The simplest solutions are the best ones!

Only successful recoveries….