Data Domain is well known for unbeatable security / performance / ease of use. Can the new Data Domain model - DD6400 - add anything? Why do the values of Data Domain matter? Have a look at the short / detailed video! https://youtu.be/8ieVepB2kno
Detailed description of topics covered in the video: 0:05 Customer story https://www.youtube.com/watch?v=8ieVepB2kno&t=5s - a customer whose data was encrypted wants to protect it for the future using Data Domain - the new DD6400 provides security, performance, scalability
3:40 Parallel backup of all items in the production environment https://www.youtube.com/watch?v=8ieVepB2kno&t=220s - virtual machines / databases - 270 streams - comparison to a concert and a toilet - the backup environment - one of the best at performing backups in parallel
5:58 Why additional speed for backup? https://www.youtube.com/watch?v=8ieVepB2kno&t=358s - Data Domain writes almost no data during backup - we need the speed for recovery - there are other mechanisms such as defragmentation
7:10 Data Domain speed from de-duplication https://www.youtube.com/watch?v=8ieVepB2kno&t=430s - variable-length de-duplication - example with 3 virtual machines built from 3 blocks - little space required - only the new yellow block needs writing and compression - DD6300 - compression by the main processor - DD6400 - a compression card that performs the compression - the processor is free for replication, cleaning, etc. - blocks are smaller - better compression - faster backup / restore, but also the whole Data Domain is faster! - all new Data Domains: DD6400 / DD6900 / DD9400 / DD9900 have the compression card
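The dedup example from the video (three VMs sharing the same blocks, only a new "yellow" block hitting disk) can be sketched as a toy content-addressed store. This is a hedged illustration only, not Data Domain's actual algorithm: real Data Domain uses variable-length chunking, while this sketch splits data into fixed-size blocks for brevity.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique block is written once."""
    def __init__(self):
        self.blocks = {}  # digest -> block bytes (the "disk")

    def backup(self, data, block_size=4):
        """Split data into blocks, store only blocks not seen before.
        Fixed-size splitting is used here purely for brevity; Data
        Domain uses variable-length blocks."""
        digests = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            d = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(d, block)  # write only if new
            digests.append(d)
        return digests  # "recipe" to rebuild this backup

store = DedupStore()
# Three VMs built from the same three blocks: only 3 blocks hit disk.
vm1 = store.backup(b"AAAABBBBCCCC")
vm2 = store.backup(b"AAAABBBBCCCC")
vm3 = store.backup(b"AAAACCCCBBBB")  # same blocks, different order
print(len(store.blocks))             # 3 unique blocks stored
# A changed VM adds just one new ("yellow") block:
vm1b = store.backup(b"AAAABBBBDDDD")
print(len(store.blocks))             # now 4
```

Only the new block is written and compressed; everything else is referenced from data already on disk.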
10:00 Starting backups directly from Data Domain! https://www.youtube.com/watch?v=8ieVepB2kno&t=600s - Instant Access - we want to restore a virtual machine immediately - we want Data Domain to act as primary storage - can we start a virtual machine from tape? It would be like asking a kid to empty the trash - SSD disk speed
11:50 Revolution in backup https://www.youtube.com/watch?v=8ieVepB2kno&t=710s - normally if something fails, we are offline until the backup restores the data - a long time - now if production fails, the backup becomes the production - example with a customer who was running 75 VMs directly from Data Domain - after a failed firmware upgrade in a disk array
12:30 Security https://www.youtube.com/watch?v=8ieVepB2kno&t=750s - checking backups on the fly - whatever backup software we have - all data is read and checked for correctness - a guarantee of recovering correct data
16:48 Replication between Data Domains https://www.youtube.com/watch?v=8ieVepB2kno&t=1008s - a whole site can be damaged - we transfer 1% of the data - we can restore 100% of the data - case study - 29 locations replicated to a central site - the customer wants only Data Domain for Disaster Recovery, based on the great experience - almost no transfer, no network usage, reliability
18:12 Demo of Data Domain compliance https://www.youtube.com/watch?v=8ieVepB2kno&t=1092s - we set up a lock for 3 months - whatever backup software we have, backups are secured against ransomware / hacker attack
22:00 Retention Lock / Compliance with Veeam backup software https://www.youtube.com/watch?v=8ieVepB2kno&t=1320s - Veeam does not automatically enforce Compliance - Cyber Bunker is the solution for Veeam - the lock is then based on the Data Domain in the Cyber Bunker - it is also possible to lock production - a snapshot which is locked
24:05 If we replace the Operating System, what about the configuration? https://www.youtube.com/watch?v=8ieVepB2kno&t=1445s - data on Data Domain is self-describing - we can install a new Operating System and attach it to the Data Domain disks
24:52 NetWorker - how to set up retention lock on Data Domain from the NetWorker level https://www.youtube.com/watch?v=8ieVepB2kno&t=1492s - for any backup software we can set up retention lock using Cyber Bunker
26:00 Capacity / Scaling https://www.youtube.com/watch?v=8ieVepB2kno&t=1560s - DD6400 starts from 8TB net and scales up to 172TB - we can increase capacity in 4TB increments - other models start from much bigger capacities, apart from the DD3300 - DD6400 - we start from 8TB and increase capacity just by entering a license - 4TB increments - increasing capacity by entering a license is possible up to 32TB - 40TB of net capacity requires adding hardware - a new shelf - then we increase capacity again just by adding a license - we can say the DD6400 provides "capacity on demand" - when ordering a DD6400 we just specify net capacity in 4TB increments
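The "capacity on demand" arithmetic above can be sketched as a small helper. This is a simplification assumed from the talk (license-only growth up to 32TB, hardware needed beyond that, 172TB maximum); the exact shelf thresholds in real configurations may differ.

```python
BASE_TB = 8              # DD6400 starting net capacity
STEP_TB = 4              # license increment
LICENSE_ONLY_MAX = 32    # per the talk: license-only growth up to 32 TB
MAX_TB = 172             # maximum net capacity

def growth_plan(target_tb):
    """Sketch of DD6400 'capacity on demand' (simplified from the talk):
    grow from 8 TB in 4 TB license steps; beyond 32 TB extra hardware
    (a shelf) is needed first, then license steps continue to 172 TB."""
    if not BASE_TB <= target_tb <= MAX_TB or (target_tb - BASE_TB) % STEP_TB:
        raise ValueError("target must be 8..172 TB in 4 TB steps")
    licenses = (target_tb - BASE_TB) // STEP_TB
    shelf_needed = target_tb > LICENSE_ONLY_MAX
    return licenses, shelf_needed

print(growth_plan(32))   # (6, False) - license-only growth
print(growth_plan(40))   # (8, True)  - requires adding a shelf first
```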
29:05 Data Domain - the best de-duplication on the market https://www.youtube.com/watch?v=8ieVepB2kno&t=1745s - small, variable-length blocks with global de-duplication - it writes almost no data - if we back up at guest level and image level, the second backup writes almost no data -> global de-duplication - source de-duplication / Virtual Synthetics - 2-3x less space than any competitor thanks to global de-dup / variable & small-block de-duplication
29:57 Customer story and Data Domain global de-duplication https://www.youtube.com/watch?v=8ieVepB2kno&t=1797s - 9TB of VMware - the first backup occupied 0.6TB (600GB) - almost nothing - 1:15 de-dup with just the 1st backup - many products struggle to exceed 1:8 de-duplication with over 30 days of retention - here it happened just after the 1st backup - Data Domain achieves 1:100 to 1:300 de-duplication with 30 days of retention - this makes Data Domain cheap!
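The 1:15 figure in the customer story is simple arithmetic: logical data protected divided by physical space consumed.

```python
def dedup_ratio(logical_tb, physical_tb):
    """Deduplication ratio: logical data protected vs. physical space used."""
    return logical_tb / physical_tb

# Customer example from the talk: 9 TB of VMware data, first backup
# landed in 0.6 TB on Data Domain -> 1:15 after a single full.
r = dedup_ratio(9.0, 0.6)
print(f"1:{r:.0f}")   # 1:15
```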
31:52 Compression card https://www.youtube.com/watch?v=8ieVepB2kno&t=1912s - we can decrease the cost even further! - example with VMs built from 3 blocks - if Data Domain sees a new block, it must write it to disk - Data Domain will compress the new block further - DD6300 - lz algorithm for new blocks - lz is not very effective, but has very little impact on the DD processor - DD6400 - compression of new blocks is done by the compression card - much faster - better compression, much less space is required (gz or gzfast)
33:40 How much less space does the DD6400 take compared to the DD6300? https://www.youtube.com/watch?v=8ieVepB2kno&t=2020s - migrating 100TB from DD6300 to DD6400 - the DD6400 will require only 80TB of space - reason -> lz compression of new blocks on DD6300 vs gz/gzfast on DD6400 - these numbers come from worldwide statistics - detailed stats for files / databases - the compression card is in DD6400 / DD6900 / DD9400 / DD9900
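The lz-vs-gz trade-off described above can be illustrated with Python's standard `zlib` module, using its compression levels as a rough stand-in: a low level is cheap but weaker (like lz on the DD6300's CPU), a high level is stronger but more expensive (like gz/gzfast, which the DD6400 offloads to its compression card). This is an analogy only, not the actual Data Domain codecs.

```python
import zlib

# Highly repetitive input, loosely mimicking backup block data.
data = b"virtual machine block " * 2000

fast = zlib.compress(data, level=1)    # cheap, weaker  - "lz"-like
strong = zlib.compress(data, level=9)  # costly, stronger - "gz"-like

# The stronger level typically yields the smaller output; on Data Domain
# the card absorbs that extra cost, so the main CPU stays free.
print(len(data), len(fast), len(strong))
```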
35:45 Customer example that migrated from DD4200 (no compression card) to DD6900 (with compression card) https://www.youtube.com/watch?v=8ieVepB2kno&t=2145s - space dropped from 27TB (DD4200) to 20TB (DD6900)
36:42 - Parameters of the DD6400 https://www.youtube.com/watch?v=8ieVepB2kno&t=2202s - part of the new model family - DD6900 / DD9400 / DD9900 - DD6400 capacity from 8TB to 172TB net, with 4TB increments - capacities of the other models - DD6400 uses 8TB drives - DD6400 supports 270 simultaneous backup streams -> hugely important - stream counts of the other models - DD6400 is 2U, with a 3U shelf - max 2 shelves - DD6400 has a built-in 10Gb Ethernet card -> SFP/BaseT to choose - DD6400 allows 3 additional Ethernet cards - the same as above, or 25Gb - DD6400 allows 1 additional FC card
Millions of files? Huge numbers of TBs? Slow backup? Issues with restore?
There is a solution! You can back up any file share fast, securely, and with easy restore! https://youtu.be/LETxuJZXnuA See a summary of what PowerProtect Data Manager offers for backup of CIFS/NFS - any NAS shares!
0:50 - Why do we talk about backup of files? https://www.youtube.com/watch?v=LETxuJZXnuA&t=50s - we talk about CIFS/NFS, which is tough to back up - no agent is possible, as in the Windows NTFS use case - we have millions of files / hundreds of TB - backup is slow - even days - restore is slow, and the restore point is from long ago
3:29 - Architecture of the file share backup https://www.youtube.com/watch?v=LETxuJZXnuA&t=209s - we get a FULL backup by reading only what has changed since the last backup - backup is hugely fast because we read only the delta - PPDM performs only FULL backups, though it reads only the delta - no incremental / differential backups - only FULLs - hugely important, because FULL backups provide the fastest restore and the possibility of granular restore
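The idea of "a FULL backup that only reads the delta" can be sketched conceptually: the new full is synthesized from the previous full's data plus the few changed files actually read from the source. This is a toy illustration, not PPDM's actual implementation.

```python
# Previous full backup, modeled as a catalog of file -> content.
previous_full = {"a.txt": b"v1", "b.txt": b"v1", "c.txt": b"v1"}

def synthetic_full(prev, changed):
    """Build a new FULL backup by reusing the previous full's data and
    overwriting only the files read from the source (the delta)."""
    new_full = dict(prev)      # existing backup data, nothing re-read
    new_full.update(changed)   # only the delta crossed the network
    return new_full

delta = {"b.txt": b"v2"}       # only one file changed since the last backup
new_full = synthetic_full(previous_full, delta)
print(sorted(new_full))        # complete catalog: ['a.txt', 'b.txt', 'c.txt']
print(new_full["b.txt"])       # b'v2'
```

The result restores like a full (every file is present at the newest point in time) even though only one file was read.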
6:40 - Value https://www.youtube.com/watch?v=LETxuJZXnuA&t=400s - no load on the protected system / the protected system does not feel the backup - no load on the network - the protected share can be far from the media - backup is hugely fast / I can perform backups frequently
8:12 - What does it mean that every backup is full, but we read only the delta? https://www.youtube.com/watch?v=LETxuJZXnuA&t=492s - if 2 files have changed, we will read just those two - yet we get a full backup - this is magic and incredibly fantastic
10:45 - PowerProtect Data Manager demo https://www.youtube.com/watch?v=LETxuJZXnuA&t=645s - PPDM just controls backups - it does not take part in data movement - we add a new policy for CIFS backups - we choose a share - we can choose synthetic full - a full built while reading only the delta
12:35 - Automated division into streams / slices https://www.youtube.com/watch?v=LETxuJZXnuA&t=755s - dynamic slicing - every 200 GB is a single stream - 3 streams for 600 GB of data - every 1 million files is a single stream - 5 streams for 5 million files
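The slicing rules above translate into simple arithmetic. Note one assumption: the talk gives the size rule and the file-count rule separately; combining them by taking the larger requirement is my reading, not something the talk states explicitly.

```python
import math

STREAM_GB = 200           # per the talk: one stream per 200 GB of data
STREAM_FILES = 1_000_000  # and one stream per 1 million files

def slice_count(size_gb, file_count):
    """Sketch of dynamic slicing. Combining the two limits by taking
    the larger requirement is an assumption."""
    by_size = math.ceil(size_gb / STREAM_GB)
    by_files = math.ceil(file_count / STREAM_FILES)
    return max(by_size, by_files)

print(slice_count(600, 100_000))    # 3 streams for 600 GB
print(slice_count(50, 5_000_000))   # 5 streams for 5 million files
```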
14:15 - demo - setting the maximum number of streams for a particular share https://www.youtube.com/watch?v=LETxuJZXnuA&t=855s - we do not want to use too many streams for a particular share - limiting streams for a large number of TBs
16:15 - The backup is controlled by a proxy https://www.youtube.com/watch?v=LETxuJZXnuA&t=975s - PPDM controls backups from the side - the proxy is installed automatically from PPDM - I can have many proxies - each of them handles 24 streams
17:26 - This functionality works for all possible CIFS/NFS shares https://www.youtube.com/watch?v=LETxuJZXnuA&t=1046s - any CIFS, any NFS, on whatever storage - hardware agnostic - we can enjoy huge speed - it is just software / a proxy - the proxy is just a virtual machine - we can add a new proxy / VM to add streams - 24 streams per proxy
19:38 - No agent required https://www.youtube.com/watch?v=LETxuJZXnuA&t=1178s - 100% agentless - we do not install anything - not even on the backed-up share - adding a new share - we will see that no agent is required - showing how simple adding a new share to PowerProtect Data Manager is
22:07 - Proxies are deployed manually https://www.youtube.com/watch?v=LETxuJZXnuA&t=1327s - we should calculate how many streams we require - if we need 50 streams, we probably need 3 proxies (3 x 24 = 72 streams) - to have some buffer
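The proxy-sizing rule of thumb above is a simple ceiling division over the 24-streams-per-proxy limit:

```python
import math

STREAMS_PER_PROXY = 24   # per the talk: each proxy handles 24 streams

def proxies_needed(required_streams):
    """How many proxies to deploy for a desired stream count."""
    return math.ceil(required_streams / STREAMS_PER_PROXY)

# Example from the talk: 50 streams -> 3 proxies (72 streams, some buffer).
print(proxies_needed(50))   # 3
```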
23:02 - Protection against ransomware / hacker attack https://www.youtube.com/watch?v=LETxuJZXnuA&t=1382s - we can set up a retention lock - demo - setting up a retention lock live in PPDM - production can be deleted, but our backup will survive
24:17 - Self-backup infrastructure https://www.youtube.com/watch?v=LETxuJZXnuA&t=1457s - demo of self-backup infrastructure - when we create a policy we normally have to say manually which shares we want to back up - but we want PPDM to automatically add new shares connected to it - showing dynamic filters live - all shares whose name contains k8s must be automatically backed up - we see that PPDM will now back up 4 shares having k8s in the name - we can also say that big shares go to one policy / small shares go to another policy - all backups from some Isilon shall go somewhere - self-backup infrastructure is available only for Unity / PowerStore / PowerScale
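The name-based dynamic filter from the demo boils down to a substring match over the shares discovered on the NAS. The share names below are hypothetical, invented only to mirror the demo's "4 shares containing k8s".

```python
# Hypothetical shares discovered on a supported NAS (names invented).
discovered_shares = [
    "k8s-logs", "finance", "k8s-etcd", "hr-docs", "backup-k8s", "k8s-registry",
]

def match_policy(shares, needle="k8s"):
    """Return the shares a name-based dynamic filter would auto-protect."""
    return [s for s in shares if needle in s]

protected = match_policy(discovered_shares)
print(len(protected), protected)   # 4 shares containing "k8s"
```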
27:35 - What is self-backup infrastructure? https://www.youtube.com/watch?v=LETxuJZXnuA&t=1655s - PPDM asks the NAS whether there are new shares - if the system is not a supported, known one, PPDM cannot scan for new shares - there is no one to ask about new resources to back up
29:00 PPDM supports any CIFS / NFS https://www.youtube.com/watch?v=LETxuJZXnuA&t=1740s - hugely fast backup for any share - but for self-backup infrastructure we need a supported system that we can ask about new resources
29:34 - Do we need any post-backup processing for the always-FULL approach? https://www.youtube.com/watch?v=LETxuJZXnuA&t=1774s - NO post-backup actions are required - we simply have FULL backups! - please test it and touch it on your own
30:15 Restore https://www.youtube.com/watch?v=LETxuJZXnuA&t=1815s - we have many backups of different shares - backups of NetApp / QNAP - we can restore to the same NAS - we can restore to Linux / Windows - we can restore to any target - we do not use NDMP
32:38 Summary https://www.youtube.com/watch?v=LETxuJZXnuA&t=1958s - a great solution that can help with time, many files, large volumes - very cost effective - look at it more closely - in the future I will show the numbers - I am very excited about this technology - see you next time!
1:04 Agenda
1:24 VADP issues and history
4:08 - RP4VM bypasses the VADP problems
4:40 - What do Transparent Snapshots give us?
- always available
- simple - no proxy
- very fast
- no VMware snapshots
- we do not load or pause VMs
6:14 - Architecture of Transparent Snapshots
10:30 - Value of Transparent Snapshot
13:35 - PPDM - backup solution that implements Transparent Snapshots
14:50 - PPDM - self backup architecture
17:10 - Why Transparent Snapshots?
No proxy, backup directly from ESX - cost savings, no impact on production
20:00 - Why PPDM?
21:44 - Transparent Snapshots requirements
22:04 - Comparison between VADP and Transparent Snapshots
22:50 Licensing
26:00 Demo
Policy, self backup
Transparent snapshots
HUGELY FAST backups - 23 seconds - the real backup time!
31:50 Summary / value of Transparent snapshots
Fast, easy and always successful backups and restores! Daniel Olkowski
Cyber bunker provides recovery after a hacker / ransomware attack, allows verification of production, and is fully automated. That is why it is gaining more and more popularity.
But: * What does a cyber bunker look like? * How is it connected with the rest of the world? * What happens in the bunker when an attack starts? And most importantly: what is the cost?
Direct links to the above discussion about protecting virtualization platforms like Red Hat, oVirt, KVM, Xen, Oracle, Nutanix, Proxmox: https://youtu.be/Ym7_GNcB0qI
----------- Here are the links to particular topics from the conversation:
1:04 What is vProtect?
1:40 vProtect architecture 2:55 1 phase backup with Data Domain with source de-dup 4:11 Server and node 5:15 Scalability
5:57 Installation of vProtect
7:51 vProtect demo
8:15 Functionalities 8:20 Backup of images 9:00 Snapshots kept on the virtualization platforms 11:00 We can restore just the state of a VM 11:40 Self-backup infrastructure
14:05 Recovery plan One button recovery Scheduled recoveries
16:15 Backup consistency
17:10 Data Domain integration Source de-duplication
17:46 Huge speed customer example
19:25 vProtect and source de-dup in the same license bundle
19:59 Summary of functionalities Full backups Incremental backups with CBT Restoring a single file Mounting disks from backups to the virtualization platform Disk exclusions Self-backup environment Pre and post commands File-level restore - question
22:58 GUI backup on demand Starting backups Creating an automated policy Creating a recovery plan
Performance & Security - these are flagship Data Domain features. They are further improved in the new Data Domain 7.2 release. Let me invite you to a short discussion about the new DD 7.2 features.
Data Domain allows you to absolutely lock backups for a specified period of time. Thanks to this, neither ransomware nor a hacker can hurt our data - we can always restore from Data Domain:
Data Domain blocks data removal / change for a defined period of time (month / quarter / …). A hacker cannot change the clock to bypass the lock - neither directly nor through the NTP server:
Version 7.2 further extends the flexibility of compliance. We can define the maximum time change applied to the Data Domain and the maximum number of changes:
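The policy described above can be sketched conceptually: a compliance clock that tolerates only a bounded adjustment size and a bounded number of adjustments, so nobody can jump the clock forward to expire retention locks early. This is a toy illustration of the idea only, not Data Domain's actual implementation, and the limit values are invented.

```python
class ComplianceClock:
    """Conceptual sketch of a tamper-resistant compliance clock.
    The limit values below are invented for illustration."""
    def __init__(self, max_skew_minutes=15, max_changes=2):
        self.max_skew = max_skew_minutes  # largest single adjustment allowed
        self.max_changes = max_changes    # adjustments allowed in the window
        self.changes_used = 0

    def request_change(self, skew_minutes):
        """Accept a clock adjustment only within policy; reject attempts
        to jump the clock to expire retention locks early."""
        if abs(skew_minutes) > self.max_skew:
            return False                  # too large - likely tampering
        if self.changes_used >= self.max_changes:
            return False                  # change budget exhausted
        self.changes_used += 1
        return True

clock = ComplianceClock()
print(clock.request_change(5))        # True  - small drift correction
print(clock.request_change(10_000))   # False - huge jump rejected
print(clock.request_change(-3))       # True  - second small change
print(clock.request_change(1))        # False - budget of 2 exhausted
```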