@@@@ Official Storage (SAN/NAS) @@@@@@


pavan_613


[quote name='30 yrs industry' timestamp='1355353174' post='1302941554']
Hmmm, let me try....

The issue has to be in the NetApp filer, the Snap products, or Microsoft Cluster Server (MSCS). An issue in the NetApp filer itself is quite rare, and if only one particular backup is failing, it cannot be a filer issue.

Which version of SnapDrive are you using? Upgrade it and see; no reboot is needed. Then run a failover test and check whether it fails over cleanly. Also check whether there is any improvement; most likely there won't be. To be on the safe side, open a case with NetApp, but I would strongly recommend you open a case with Microsoft as well, because every vendor follows a "cover my ass" policy: NetApp will say it's not their issue and tell you to open a case with Microsoft, while Microsoft will blame the filer.

In the meantime, go into Event Viewer and read the error descriptions; that will give you some idea of what is actually going on. Those are the troubleshooting steps I know of based on the info you provided. Hope that helps.
[/quote]

Cheers bro, we escalated it to NetApp; they said there was a mistake in the DAG config and posted a big how-to doc about it!


  • 4 weeks later...

Guys, just want to update you on my progress with NetApp.
The new job is good, a whole different world from what I have done before.
Since I told them at recruitment that I didn't know much about storage, there's no pressure; it's going kinda cool.

How are you guys anyway?


[quote name='computaboi' timestamp='1357993199' post='1303096747']
Guys, just want to update you on my progress with NetApp.
The new job is good, a whole different world from what I have done before.
Since I told them at recruitment that I didn't know much about storage, [b]there's no pressure; it's going kinda cool[/b].

How are you guys anyway?
[/quote]

Good to know that. It's been 2 months since I joined my project; kind of a long project, it runs until the end of this year.


@ NetApp professionals

Bhayya, need a little help.

My volumes of interest: Filer: vol_ncv_db_unrestricted & vol_cmv_db_A_I
My volume container: Filer: aggr0_fc300

I ran aggr show_space on 'aggr0_fc300', which returned statistics about the individual volumes. However, I've noticed the numbers reported by aggr show_space are lower than those from df -h; do you know what the reason for this could be?
It would make sense the other way round, since df doesn't include metadata, but I've read somewhere that when you run the statistics command at the aggr level, it includes space occupied by metadata in the output.
I could be wrong; could you please clarify?

Sample
aggr show_space aggr0_fc300

vol_mail3sg2_std_db 126GB 0GB volume
vol_cmv_db_A_I 25GB 0GB volume
vol_cmv_db_J_Q 25GB 0GB volume
vol_cmv_db_R_Z 25GB 0GB volume
vol_ncv_db_unrestricted 884GB 402GB volume


Filer> df -h vol_ncv_db_unrestricted
Filesystem total used avail capacity Mounted on
/vol/vol_ncv_db_unrestricted/ 880GB 812GB 67GB 92% /vol/vol_ncv_db_unrestricted/
/vol/vol_ncv_db_unrestricted/.snapshot 0KB 51GB 0KB ---% /vol/vol_ncv_db_unrestricted/.snapshot


filer> df -h vol_cmv_db_A_I
Filesystem total used avail capacity Mounted on
/vol/vol_cmv_db_A_I/ 25GB 15GB 9683MB 62% /vol/vol_cmv_db_A_I/
/vol/vol_cmv_db_A_I/.snapshot 0KB 50MB 0KB ---% /vol/vol_cmv_db_A_I/.snapshot


[quote name='computaboi' timestamp='1358289277' post='1303113453']
@ NetApp professionals

Bhayya, need a little help.

My volumes of interest: Filer: vol_ncv_db_unrestricted & vol_cmv_db_A_I
My volume container: Filer: aggr0_fc300

I ran aggr show_space on 'aggr0_fc300', which returned statistics about the individual volumes. However, I've noticed the numbers reported by [color=#ff0000][b]aggr show_space are lower than df -h[/b][/color]; do you know what the reason for this could be?
It would make sense the other way round, since df doesn't include metadata, but I've read somewhere that when you run the statistics command at the aggr level, it includes space occupied by metadata in the output.
I could be wrong; could you please clarify?

Sample
aggr show_space aggr0_fc300

vol_mail3sg2_std_db 126GB 0GB volume
vol_cmv_db_A_I 25GB 0GB volume
vol_cmv_db_J_Q 25GB 0GB volume
vol_cmv_db_R_Z 25GB 0GB volume
[color=#ff0000][b]vol_ncv_db_unrestricted 884GB[/b][/color] 402GB volume


Filer> df -h vol_ncv_db_unrestricted
Filesystem total used avail capacity Mounted on
[b][color=#ff0000]/vol/vol_ncv_db_unrestricted/ 880GB[/color][/b] 812GB 67GB 92% /vol/vol_ncv_db_unrestricted/
/vol/vol_ncv_db_unrestricted/.snapshot 0KB 51GB 0KB ---% /vol/vol_ncv_db_unrestricted/.snapshot


filer> df -h vol_cmv_db_A_I
Filesystem total used avail capacity Mounted on
/vol/vol_cmv_db_A_I/ 25GB 15GB 9683MB 62% /vol/vol_cmv_db_A_I/
/vol/vol_cmv_db_A_I/.snapshot 0KB 50MB 0KB ---% /vol/vol_cmv_db_A_I/.snapshot
[/quote]

Dude, you said aggr show_space is lower than df... but it's the other way around:
aggr show_space is 884GB and df is 880GB


Let me tell you what I know... if you get confused, get back to me.


[b]aggr show_space[/b] displays the [b]space usage[/b] in an aggregate,
whereas the [b]df[/b] command displays [b]allocated space[/b].

To put it plainly: the aggr show_space command shows how much space the volumes in an aggregate are actually using,
while the df command shows how much space has been allocated to a volume. One shows how much we gave; the other shows how much is being used.

I personally use the aggr show_space command
1) to display all the flexible volumes on that aggregate
2) when space issues come up on an aggr, to see which volume is occupying how much space.

The df command I use from the volume perspective; I mean, to find out how much space was allocated to the volume, how much is used, and how much is free. So when space issues come up in a volume, I use df.

Simply put: I use aggr show_space for aggr space issues and df for volume space issues. Make sense?


In some cases, aggr show_space can be more than df (i.e., used is more than allocated), and in some cases aggr show_space can be less than df (i.e., used is less than allocated). What is the reason? The answer is how you provision your volumes: thin provisioned or thick.

With thin provisioning you can use more than what was allocated, right? That's why used (aggr show_space) is more than allocated (df). Without thin provisioning, used will be less than allocated; that's all. If I give you $10, you can spend at most $10. How can you spend $11 if I give you only $10? Answer: yes, you can, with thin provisioning.

I used $ so it's easier to follow... if that's confusing, replace $ with GB.

Did that confuse you, or did you get it?
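That $10/$11 analogy can be sketched as a toy model in Python (all numbers, names, and the `Volume` class here are hypothetical illustrations, not real ONTAP behavior or output):

```python
# Toy model of thick vs thin provisioning:
# "allocated" is what df reports; "used" is what aggr show_space reports.
class Volume:
    def __init__(self, size_gb, thin):
        self.size_gb = size_gb   # nominal size, i.e. what df shows as allocated
        self.thin = thin         # thin provisioned (guarantee=none) or thick
        self.used_gb = 0         # blocks actually written

    def write(self, gb):
        if not self.thin and self.used_gb + gb > self.size_gb:
            raise ValueError("thick volume: cannot use more than allocated")
        self.used_gb += gb       # a thin volume may grow past its nominal size

thick = Volume(10, thin=False)
thick.write(4)                   # used 4GB < allocated 10GB: fine

thin = Volume(10, thin=True)
thin.write(11)                   # the "$11 out of $10" case: allowed when thin
print(thick.used_gb, thin.used_gb)  # 4 11
```

In the thick case, used stays below allocated (aggr show_space below df); in the thin case, used can exceed the nominal size, which is the situation where aggr show_space reads higher than df.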


[quote name='30 yrs industry' timestamp='1358307506' post='1303115094']
Dude, you said aggr show_space is lower than df... but it's the other way around:
aggr show_space is 884GB and df is 880GB
[/quote]


Sorry, I meant used space.
If you look at the results from aggr show_space:
aggr show_space aggr0_fc300

Volume Total Used Guarantee
vol_cmv_db_A_I 25GB 0GB volume

In the example above, it says the used space is 0GB out of 25GB total,
but df -h shows a different used space; that's what I don't understand.

Sorry, no offense bhayya, but I didn't follow what you said; I'm new to NetApp, which is why I'm struggling to understand this concept.


[quote name='computaboi' timestamp='1358329215' post='1303115663']
Sorry, I meant used space.
If you look at the results from aggr show_space:
aggr show_space aggr0_fc300

Volume Total Used Guarantee
vol_cmv_db_A_I 25GB 0GB volume

In the example above, it says the used space is 0GB out of 25GB total,
but df -h shows a different used space; that's what I don't understand.

Sorry, no offense bhayya, but I didn't follow what you said; I'm new to NetApp, which is why I'm struggling to understand this concept.
[/quote]


df and df -A, when used together, can help illustrate how much space is actually on an aggregate versus how much the volumes are using.
However, these commands can be misinterpreted and are occasionally inaccurate.
The best way to show how much space is being used vs. being allocated is "aggr show_space"
This command will illustrate the accurate amount of space actually being used, regardless of guarantees.
aggr show_space with volume guarantee on for "test":
filer> aggr show_space aggr1 -h
Aggregate 'aggr1'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG
825GB 82GB 37GB 705GB 1180MB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
syncdest 50GB 214MB volume
test 100GB* 816KB volume
Aggregate Allocated Used Avail
Total space 150GB 215MB 554GB
Snap reserve 37GB 133MB 37GB
WAFL reserve 82GB 1207MB 81GB
*Note how the allocation of the volume "test" greatly differs from the "used".
aggr show_space with volume guarantee disabled for "test":
filer> aggr show_space aggr1 -h
Aggregate 'aggr1'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG
825GB 82GB 37GB 705GB 1180MB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
syncdest 50GB 214MB volume
test 868KB* 868KB none
Aggregate Allocated Used Avail
Total space 50GB 215MB 654GB
Snap reserve 37GB 133MB 37GB
WAFL reserve 82GB 1207MB 81GB
*Notice how "test" is showing only 868KB used - this is because the "20GB" inside of the volume is actually a lun with space reservations turned on - but no data inside of it. Additionally, note how the space allocated matches the space used. This is how the filer sees the space in a volume with no guarantee versus one with guarantee enabled.

To understand this example you need to know what "guarantee" means... so read up on that first.


[quote name='30 yrs industry' timestamp='1358387434' post='1303121171']
Df and df -A, when used together, can help illustrate how much space is actually on an aggregate versus how much the volumes are using.
[snip: full aggr show_space examples with and without volume guarantee, quoted from the previous post]
To understand this example you need to know what "guarantee" means... so read up on that first.
[/quote]



Hey, thanks for taking the time to add your thoughts.
I understand that the 'total space' of a volume in the aggr show_space output can differ from the df -h number, but what I fail to understand is the 'used space'.

df -h vol_cmv_db_A_I
Filesystem total [color=#333333]used[/color] avail capacity Mounted on
/vol/vol_cmv_db_A_I/ 25GB [color=#ff0000]15GB[/color] 9875MB 61% /vol/vol_cmv_db_A_I/
/vol/vol_cmv_db_A_I/.snapshot 0KB [color=#ff0000]116MB[/color] 0KB ---% /vol/vol_cmv_db_A_I/.snapshot

aggr show_space, with -h and without:
vol_cmv_db_A_I 25GB [color=#ff0000]349MB[/color] volume
vol_cmv_db_A_I 22142428KB [color=#ff0000]10048956KB[/color] volume

lun show
/vol/vol_cmv_db_A_I/vol_cmv_db_A_I.lun 15.0g (16113323520) (r/w, online, mapped)

The only explanation I can think of is that the LUN space is reserved but the actual space used inside the LUN is only 349MB?
This vol has fractional reserve (FR) = 100%. I understand FR at 100% on a 25GB volume reserves another 25GB to allow overwrites, but I am not sure where it takes that space from; I am confused now!


[quote name='computaboi' timestamp='1358595976' post='1303136868']
[snip: df -h, aggr show_space, and lun show output quoted from the previous post]
This vol has fractional reserve (FR) = 100%. [b]I understand FR at 100% on a 25GB volume reserves another 25GB to allow overwrites, but I am not sure where it takes that space from; I am confused now[/b]!
[/quote]

S o r r y... was really busy these past few days.

The space comes from the aggregate. If the committed space exceeds the size of the aggregate, that's when we get an alert on DFM saying "over committed aggregate", and that is when we need to add disks.

If a qtree is full, what would you do? You increase the quota. Where is that space coming from? From the volume. Say the volume is getting full; what would you do? You either delete snapshots or increase the size of the volume. Where is that space coming from? From the aggregate. And if the aggregate is full? You add more disks.
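That qtree → volume → aggregate chain can be sketched as a toy capacity model (all names and numbers below are hypothetical, purely to illustrate where each layer borrows its space from):

```python
# Toy model of where space comes from at each layer:
# a qtree quota grows out of its volume; a volume grows out of its aggregate.
aggregate_gb = 1000
volumes_gb = {"vol1": 400, "vol2": 300}   # volumes carved from the aggregate
qtree_quota_gb = {"q1": 100}              # quota carved from vol1

def can_grow_quota(qtree, extra_gb, vol):
    # A quota can grow only while it still fits inside its containing volume.
    return qtree_quota_gb[qtree] + extra_gb <= volumes_gb[vol]

def can_grow_volume(vol, extra_gb):
    # A volume can grow only while total committed space fits the aggregate.
    return sum(volumes_gb.values()) + extra_gb <= aggregate_gb

print(can_grow_quota("q1", 50, "vol1"))   # True: 150GB fits in a 400GB volume
print(can_grow_volume("vol1", 400))       # False: 1100GB > 1000GB, add disks
```

When `can_grow_volume` comes back False, the only way forward is the last step in the chain: grow the aggregate by adding disks.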


And I would strongly recommend you go through the basics. You can search "netapp back to basics" on Google; for example, "netapp back to basics fractional reserve" will give you all the basics of FR. Read one topic a day; trust me, you can do wonders.

Also, practice making use of the KB. Here is the link; you can ask it any question, and instead of waiting for my answers you can read from the KB.

[url="https://kb.netapp.com/support/index?page=home"]https://kb.netapp.co...index?page=home[/url]

Say, for example, try "how to delete a busy flexclone". I got dozens of hits and picked the one best suited to my question.

[b]Symptoms[/b]

Snapshots cannot be deleted because they are busy. Snap list shows snapshots in a busy, vclone state.

filer> snap list vol2a

===== snap list vol2a ========
Aug 04 18:14 eloginfo__mail02_08-04-2008_17.45.00__daily
Aug 04 17:45 exchsnap__mail02_08-04-2008_17.45.00__daily
Aug 03 17:45 exchsnap__mail02_08-03-2008_17.45.00__daily
Jul 20 17:45 exchsnap__mail02_07-20-2008_17.45.00__daily (busy,vclone)
Jun 05 17:45 exchsnap__mail02_06-05-2008_17.45.00 (busy,vclone)
May 23 17:45 exchsnap__mail02_05-23-2008_17.45.00 (busy,vclone)

[b]Cause[/b]

The snapshots are in a busy, vclone state because they were the base snapshots used to create a FlexClone volume. The base snapshot of a parent volume cannot be deleted while a cloned volume exists. See Limitations of Volume Cloning.

In this example, the clones were created by SnapDrive as part of the backup verification process for SnapManager for Exchange. Normally these clones are automatically deleted upon completing the verification.

Internal Note
Due to Burt # 286623, affecting SnapDrive 5.0 when used with Data ONTAP 7.1.X, the volume clones might not get deleted properly.

[b]Solution[/b]

For volume clones not created by SnapDrive or containing LUNs:

Identify the volume clones by viewing the volume in question.

filer> vol status -v <volume_name>

vol2a online raid_dp, flex nosnap=off, nosnapdir=off,
minra=off,
no_atime_update=off,
nvfail=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=on,
maxdirsize=20971,
fs_size_fixed=off,
guarantee=volume,
svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off,
fractional_reserve=100,
extent=off,
try_first=volume_grow
Volume has clones: SnapDrive_vol2a_clone_of_exchsnap__mail02__recent_snapshot_2,
SnapDrive_vol2a_clone_of_exchsnap__mail02__recent_snapshot_1, SnapDrive_vol2a_clone_of_exchsnap__mail02__recent_snapshot_0
Containing aggregate: 'aggr1'
Verify if the vol clones are needed.
If the clones are not needed:
Offline and destroy all of the vol clones listed.
filer> vol offline <clone_volume_name>
filer> vol destroy <clone_volume_name>
If you wish to keep the vol clones:
You can split the clone from the parent.
Note: Once the clone is split from the parent, it will result in two separate volumes occupying separate blocks in the
aggregate. Ensure you have adequate space to contain both volumes.
filer> vol clone split start <clone_volume_name>
Depending on the size of the volume and the load on the filer, the time this takes to complete might vary. You might check the progress of the split using:
filer> vol clone split status <clone_volume_name>
For volume clones created by SnapDrive or containing LUNs:

Identify the volume clones by viewing the volume in question.
filer> vol status -v

vol2a online raid_dp, flex nosnap=off, nosnapdir=off,
minra=off,
no_atime_update=off,
nvfail=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=on,
maxdirsize=20971,
fs_size_fixed=off,
guarantee=volume,
svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off,
fractional_reserve=100,
extent=off,
try_first=volume_grow
Volume has clones: SnapDrive_vol2a_clone_of_exchsnap__mail02__recent_snapshot_2,
SnapDrive_vol2a_clone_of_exchsnap__mail02__recent_snapshot_1, SnapDrive_vol2a_clone_of_exchsnap__mail02__recent_snapshot_0
Containing aggregate: 'aggr1'

Verify if these clones contain LUNs that are mounted and to which host they are mounted by:
filer> lun show -m

If the clones contain mounted LUNs, they should be disconnected using the host.
Under SnapDrive > Disks in the host's MMC, locate and disconnect the offending LUNs by right-clicking them and selecting disconnect.

Once disconnected, SnapDrive will destroy the FlexClone volume, clearing the busy, vclone snapshots.
A new feature of Data ONTAP 7.3 mitigates this by turning on the option vol options volume_name snapshot_clone_dependency.
> vol options volume_name snapshot_clone_dependency on


[quote name='30 yrs industry' timestamp='1358914283' post='1303156436']
and I would strongly recommend you to go through the basics.....you can search "netapp back to basics" on google...
[snip: the KB article "how to delete a busy flexclone" quoted in full from the previous post]
[/quote]

Thanks for taking the time to reply bhayya, much appreciated; however, I'm really sorry to say I'm not sure you understood the question.
My query was really about the reason behind the difference between the df -h and aggr show_space outputs for a specific vol/aggr.

I'm not sure how snapshots would make any difference in the example I provided; likewise, flexclones have no effect on the numbers, or rather on the difference between them. Anyway, I think I've learned the answer now, so I thought I'd share it with other guys who might be interested.

The LUN is space reserved and the FR value is set to 100%, but the volume sizing is incorrect, so it's safe to take FR out of the equation. aggr shows the actual block space being used across the disks for a specific vol, while df includes all space reservations, FR, and all the other good stuff that can be provisioned with options.
Hope the above helps.
Below is the support community thread I started, where people from the technical community contributed in various ways and more or less agreed that LUN space reservation is the cause; feel free to have a look :)
https://forums.netapp.com/message/160397.


[quote name='30 yrs industry' timestamp='1358914150' post='1303156407']
S o r r y... was really busy these past few days.

[color=#ff0000]The space comes from the aggregate. If the committed space exceeds the size of the aggregate, that's when we get an alert on DFM saying "over committed aggregate", and that is when we need to add disks[/color].

If a qtree is full, what would you do? You increase the quota. Where is that space coming from? From the volume. Say the volume is getting full; what would you do? You either delete snapshots or increase the size of the volume. Where is that space coming from? From the aggregate. And if the aggregate is full? You add more disks.
[/quote]

Yeah, but I was asking about the volume.
A volume with 25GB total capacity has a 15GB LUN with space reservation, and fractional reserve set to 100%.
So in theory the space reservation blocks off 15GB worth of blocks for the LUN to use in future, and to allow overwrites of the LUN blocks, fractional reserve would need another 15GB of space. That's a bit of a problem here, as the total vol size is only 25GB, which is less than the required 30GB plus snapshot space.
So setting FR to 100% is not the ideal way of doing things here; it should've been set to 50% or something like that, definitely not 100%, as there isn't enough space in the vol at the moment.
The other option would be to resize the volume.
Anyway, bottom line: the number difference arises because df shows output counting all the space reservations, etc., whereas the aggr command shows the actual block space currently utilized on the disks (without fractional reserve).
Cheers
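That bottom line reduces to plain arithmetic. A sketch using this thread's numbers (the variable names are mine, not ONTAP's, and the df/aggr mapping is the interpretation reached above):

```python
# df counts reservations as used; aggr show_space counts blocks actually written.
vol_size_gb = 25
lun_reserved_gb = 15        # space-reserved LUN inside the volume
blocks_written_mb = 349     # actual data written (aggr show_space -h "Used")

df_used_gb = lun_reserved_gb       # df: the whole reservation shows as used
aggr_used_mb = blocks_written_mb   # aggr show_space: only the real blocks

# FR at 100% wants an overwrite reserve equal to the reserved LUN space,
# which cannot fit in this volume alongside the LUN itself:
fr_overwrite_gb = 1.00 * lun_reserved_gb
print(lun_reserved_gb + fr_overwrite_gb > vol_size_gb)   # True: 30GB > 25GB
```

So df's 15GB "used" versus aggr show_space's 349MB is exactly the reservation-versus-written-blocks gap, and the FR line shows why 100% cannot actually be honored in a 25GB volume.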

