
Calling ACP, I'm setting up a RAC interview for you.. come quickly!


vijay2349


[quote name='PMR aka OM' timestamp='1324404885' post='1301171112']

It's nothing like that, bro. In our place, while replicating the existing DB onto the newly built DEV servers, they want the same-size disks as on the old servers for the new servers,

Our Storage folks, though, say "no way, we don't provision like that anymore; we have one standardised size these days, so you need to take this big disk and partition it on your end", or something like that, and I only half understand it..

Somehow I cover it up in those meetings and manage, these days..
[/quote]

agreed .....in our meetings too they keep going on about some partitions or other..... if it weren't for their stinginess, I guess they could just put it on a new disk.....



[quote name='RUDRAKSHA' timestamp='1324404574' post='1301171097']

So here, what is the role of the voting disk? If the nodes themselves can evict another dead node, what's the use of the voting disk?
[/quote]

They mentioned a control file there, right.. maybe that controlfile is related to the voting disk.. or the controlfile they are referring to is the DB's.. if it is the DB's, then the voting disk doesn't play any role there.. but I strongly feel that during node eviction it will read or update the node membership information in the voting disk..

so, the control file they mention could well be the DB control file
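A rough way to picture the voting disk's role in an eviction decision, as a minimal conceptual sketch in Python (not Oracle's actual CSS algorithm; the VotingDisk objects and their register method are hypothetical stand-ins for writing a heartbeat/membership record):

[code]
# Conceptual sketch only -- not Oracle's CSS implementation.
# Idea: in a split-brain, a node stays in the cluster only if it can still
# write its membership record to a majority of voting disks; otherwise it
# evicts itself.

def should_node_survive(node, voting_disks):
    """Return True if `node` can register on a majority of voting disks."""
    votes = 0
    for disk in voting_disks:          # hypothetical disk objects
        try:
            disk.register(node)        # write membership/heartbeat record
            votes += 1
        except OSError:
            pass                       # disk unreachable from this node
    return votes > len(voting_disks) // 2
[/code]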


[quote name='k2s' timestamp='1324403545' post='1301171020']

that's bot crawling...
[/quote]

Is that BOT crawling????


Is it something like this??


[color=#000000][font=sans-serif][size=3]
A [b]Web crawler[/b] is a computer program that browses the [url="http://en.wikipedia.org/wiki/World_Wide_Web"]World Wide Web[/url] in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are [i]ants[/i], [i]automatic indexers[/i], [i]bots[/i],[sup][url="http://en.wikipedia.org/wiki/Web_crawler#cite_note-0"][1][/url][/sup] [i]Web spiders[/i],[sup][url="http://en.wikipedia.org/wiki/Web_crawler#cite_note-spekta-1"][2][/url][/sup] [i]Web robots[/i],[sup][url="http://en.wikipedia.org/wiki/Web_crawler#cite_note-spekta-1"][2][/url][/sup] or—especially in the [url="http://en.wikipedia.org/wiki/FOAF_(software)"]FOAF[/url] community—[i]Web scutters[/i].[sup][url="http://en.wikipedia.org/wiki/Web_crawler#cite_note-2"][3][/url][/sup][/size][/font][/color]
[color=#000000][font=sans-serif][size=3]
This process is called [i]Web crawling[/i] or [i]spidering[/i]. Many sites, in particular [url="http://en.wikipedia.org/wiki/Web_search_engine"]search engines[/url], use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will [url="http://en.wikipedia.org/wiki/Index_(search_engine)"]index[/url] the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating [url="http://en.wikipedia.org/wiki/HTML"]HTML[/url] code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending [url="http://en.wikipedia.org/wiki/Spamming"]spam[/url]).[/size][/font][/color]
[color=#000000][font=sans-serif][size=3]
A Web crawler is one type of [url="http://en.wikipedia.org/wiki/Internet_bot"]bot[/url], or software agent. In general, it starts with a list of [url="http://en.wikipedia.org/wiki/Uniform_Resource_Locator"]URLs[/url] to visit, called the [i]seeds[/i]. As the crawler visits these URLs, it identifies all the [url="http://en.wikipedia.org/wiki/Hyperlink"]hyperlinks[/url] in the page and adds them to the list of URLs to visit, called the [i]crawl frontier[/i]. URLs from the frontier are recursively visited according to a set of policies.[/size][/font][/color]
[color=#000000][font=sans-serif][size=3]
The large volume implies that the crawler can only download a fraction of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that the pages might have already been updated or even deleted.[/size][/font][/color]
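The seed/crawl-frontier loop described above can be sketched in a few lines of Python. This is a toy illustration only; real crawlers add politeness delays, robots.txt checks, deduplication and prioritisation:

[code]
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=10):
    frontier = list(seeds)          # URLs still to visit (the crawl frontier)
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                # unreachable page: skip it
        parser = LinkExtractor()
        parser.feed(html)
        # newly discovered links go back onto the frontier
        frontier.extend(urljoin(url, link) for link in parser.links)
    return visited
[/code]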


[url="http://support.google.com/webmasters/bin/answer.py?hl=en&answer=182072"]http://support.google.com/webmasters/bin/answer.py?hl=en&answer=182072[/url]

[font=Arial, Helvetica, sans-serif][size=3]

[b] Googlebot[/b]

[/size][/font]
[font=Arial, Helvetica, sans-serif][size=3]

Googlebot is Google's web crawling bot (sometimes also called a "spider"). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
We use a huge set of computers to fetch (or "crawl") billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
Googlebot's crawl process begins with a list of webpage URLs, generated from previous crawl processes and augmented with [url="http://support.google.com/webmasters/bin/answer.py?answer=156184"]Sitemap[/url] data provided by webmasters. As Googlebot visits each of these websites it detects links (SRC and HREF) on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
[b] For webmasters: Googlebot and your site[/b]

[b] How Googlebot accesses your site[/b]

For most sites, Googlebot shouldn't access your site more than once every few seconds on average. However, due to network delays, it's possible that the rate will appear to be slightly higher over short periods. In general, Googlebot should download only one copy of each page at a time. If you see that Googlebot is downloading a page multiple times, it's probably because the crawler was stopped and restarted.
Googlebot was designed to be distributed on several machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we run many crawlers on machines located near the sites they're indexing in the network. Therefore, your logs may show visits from several machines at google.com, all with the user-agent Googlebot. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server's bandwidth. [url="http://support.google.com/webmasters/bin/answer.py?answer=48620"]Request a change in the crawl rate.[/url]
[b] Blocking Googlebot from content on your site[/b]

It's almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the referrer tag and can be stored and published by the other web server in its referrer log. Similarly, the web has many outdated and broken links. Whenever someone publishes an incorrect link to your site or fails to update links to reflect changes in your server, Googlebot will try to download an incorrect link from your site.
If you want to prevent Googlebot from crawling content on your site, you have a [url="http://support.google.com/webmasters/bin/answer.py?answer=93708"]number of options[/url], including using [url="http://support.google.com/webmasters/bin/answer.py?answer=156449"]robots.txt[/url] to block access to files and directories on your server.
Once you've created your robots.txt file, there may be a small delay before Googlebot discovers your changes. If Googlebot is still crawling content you've blocked in robots.txt, check that the robots.txt is in the correct location. It must be in the top directory of the server (e.g., www.myhost.com/robots.txt); placing the file in a subdirectory won't have any effect.
If you just want to prevent the "file not found" error messages in your web server log, you can create an empty file named robots.txt. If you want to prevent Googlebot from following any links on a page of your site, you can use the [url="http://support.google.com/webmasters/bin/answer.py?answer=96569"]nofollow meta tag[/url]. To prevent Googlebot from following an individual link, add the rel="nofollow" attribute to the link itself.
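As an aside, whether a given URL is blocked by a robots.txt file can be checked programmatically. A minimal Python sketch using the standard-library robot parser (the example.com URLs are placeholders, not from the article):

[code]
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://www.example.com/robots.txt")  # placeholder site
rp.read()                                                   # fetch and parse the file

# can_fetch(user_agent, url) applies the rules the way a polite bot would
print(rp.can_fetch("Googlebot", "http://www.example.com/private/page.html"))
[/code]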
Here are some additional tips:[list]
[*]Test that your robots.txt is working as expected. The [url="http://support.google.com/webmasters/bin/answer.py?answer=156449&expand=test1"]Test robots.txt tool[/url] in [url="http://www.google.com/webmasters/tools"]Webmaster Tools[/url] lets you see exactly how Googlebot will interpret the contents of your robots.txt file. The Google user-agent is (appropriately enough) Googlebot.
[*]The [url="http://support.google.com/webmasters/bin/answer.py?answer=158587"]Fetch as Googlebot tool[/url] in Webmaster Tools helps you understand exactly how your site appears to Googlebot. This can be very useful when troubleshooting problems with your site's content or discoverability in search results.
[/list]
[b] Making sure your site is crawlable[/b]

Googlebot discovers sites by following links from page to page. The [url="http://support.google.com/webmasters/bin/answer.py?answer=35120"]Crawl errors[/url] page in Webmaster Tools lists any problems Googlebot found when crawling your site. We recommend reviewing these crawl errors regularly to identify any problems with your site.
If you're running an AJAX application with content that you'd like to appear in search results, we recommend reviewing our proposal on making [url="http://support.google.com/webmasters/bin/answer.py?answer=174992"]AJAX-based content crawlable and indexable[/url].
If your robots.txt file is working as expected, but your site isn't getting traffic, here are some [url="http://support.google.com/webmasters/bin/answer.py?answer=34444"]possible reasons why your content is not performing well in search[/url].
[b] Problems with spammers and other user-agents[/b]

The IP addresses used by Googlebot change from time to time. The best way to identify accesses by Googlebot is to use the user-agent (Googlebot). You can [url="http://support.google.com/webmasters/bin/answer.py?answer=80553"]verify that a bot accessing your server really is Googlebot[/url] by using a reverse DNS lookup.
Googlebot and all respectable search engine bots will respect the directives in robots.txt, but some nogoodniks and spammers do not. [url="http://support.google.com/webmasters/bin/answer.py?answer=93713"]Report spam to Google.[/url]
Google has several other user-agents, including Feedfetcher (user-agent Feedfetcher-Google). Since Feedfetcher requests come from explicit action by human users who have added the feeds to their [url="http://www.google.com/ig"]Google home page[/url] or to [url="http://www.google.com/reader"]Google Reader[/url], and not from automated crawlers, Feedfetcher does not follow robots.txt guidelines. You can prevent Feedfetcher from crawling your site by configuring your server to serve a 404, 410, or other error status message to user-agent Feedfetcher-Google. [url="http://support.google.com/webmasters/bin/answer.py?answer=178852"]More information about Feedfetcher.[/url][/size][/font]
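The reverse DNS check mentioned above (confirm that an IP claiming to be Googlebot reverse-resolves to a googlebot.com or google.com host, then forward-resolves back to the same IP) could look roughly like this in Python; the sample IP is only a placeholder:

[code]
import socket

def is_googlebot(ip):
    """Forward-confirmed reverse DNS: reverse-resolve the IP, check the
    domain, then resolve the name forward and confirm it maps back."""
    try:
        host = socket.gethostbyaddr(ip)[0]                    # reverse lookup
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]         # forward confirm
    except socket.gaierror:
        return False

# usage: is_googlebot("66.249.66.1")  # placeholder IP taken from a server log
[/code]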


[quote name='PMR aka OM' timestamp='1324405133' post='1301171119']

Is that BOT crawling????


Is it something like this??

[...]
[/quote]

calling MODs again... Reddy will scold us again in our networking lingo ....


just kidding.... yeah... that's exactly it


[quote name='k2s' timestamp='1324405124' post='1301171117']

agreed .....in our meetings too they keep going on about some partitions or other..... if it weren't for their stinginess, I guess they could just put it on a new disk.....
[/quote]

The whole storage is EMC's property, bro; they charge us monthly.. they fleece us completely.. say a server runs for 3 years.. they end up spending 3 times more money on storage than the actual server cost..

so with SAN/NAS, the EMC guy makes us see stars..


[quote name='PMR aka OM' timestamp='1324405243' post='1301171122']

The whole storage is EMC's property, bro; they charge us monthly.. they fleece us completely.. say a server runs for 3 years.. they end up spending 3 times more money on storage than the actual server cost..

so with SAN/NAS, the EMC guy makes us see stars..
[/quote]

Our folks bought servers in 2 big cupboard-like EMC racks... they said they spent a ton of money on them.....


[quote name='k2s' timestamp='1324405224' post='1301171121']

calling MODs again... Reddy will scold us again in our networking lingo ....


just kidding.... yeah... that's exactly it
[/quote]

But in this story he said it's about collecting info from the sites, right.

At our place, I remember it being like they push unnecessary traffic with code..

some code will flood, like, some 500 users from different pages onto the same page..

that way you put the maximum load the server can take, so that the server ends up serving the wrong users while the actual end users face issues..


for that we again included some module or other in Apache..


[quote name='Alexander' timestamp='1324405365' post='1301171128']
calling MODS plz move this thread to learning/training section... plz dont spam discussion section
[/quote]

not agreed ....this is also a spamming thread

:3D_Smiles:


[quote name='Alexander' timestamp='1324405365' post='1301171128']
calling MODS plz move this thread to learning/training section... plz dont spam discussion section
[/quote]

(*,):? (*,):? (*,):? (*,):?


[quote name='k2s' timestamp='1324405414' post='1301171130']

not agreed ....this is also a spamming thread

:3D_Smiles:
[/quote]

that's exactly what I said too: plz don't spam the discussion section...


[quote name='PMR aka OM' timestamp='1324405382' post='1301171129']

But in this story he said it's about collecting info from the sites, right.

At our place, I remember it being like they push unnecessary traffic with code..

some code will flood, like, some 500 users from different pages onto the same page..

that way you put the maximum load the server can take, so that the server ends up serving the wrong users while the actual end users face issues..


for that we again included some module or other in Apache..
[/quote]
All of that is something you have to watch yourselves.... that's our load balancer's job... where did that request come from.... is it a genuine user.... or a bot,...... ??? that's what we check..... we give it a visitor ID.... tell it to go on.... and send it over to you...... that one request can take down your whole DB....

You have to set the request-serving thresholds yourselves....
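Those serving thresholds can be as simple as a per-visitor sliding-window counter. A minimal Python sketch, with arbitrary example numbers (real load balancers use far more signals than a raw request count):

[code]
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10      # arbitrary example window
MAX_REQUESTS = 50        # arbitrary per-visitor threshold

hits = defaultdict(deque)   # visitor_id -> timestamps of recent requests

def allow_request(visitor_id):
    """Return True if this visitor is under the threshold, False to throttle."""
    now = time.time()
    q = hits[visitor_id]
    while q and now - q[0] > WINDOW_SECONDS:   # drop timestamps outside the window
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False        # looks like bot/flood traffic; don't pass it to the DB
    q.append(now)
    return True
[/code]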


[quote name='Alexander' timestamp='1324405365' post='1301171128']
calling MODS plz move this thread to learning/training section... plz dont spam discussion section
[/quote]

[img]http://i872.photobucket.com/albums/ab288/sajja01/cry-o.gif[/img] while we kids are studying quietly .. uncle comes along and disturbs us


[quote name='k2s' timestamp='1324405326' post='1301171126']

Our folks bought servers in 2 big cupboard-like EMC racks... they said they spent a ton of money on them.....
[/quote]

So you bought EMC racks outright.. We acquired the storage, but the EMC team maintains everything: the NAS on the network for all the servers, and all the SAN assignments that come in on disk..

I've heard of something called LUNs for the SAN there,,


In [url="http://en.wikipedia.org/wiki/Computer_storage"]computer storage[/url], a [b]logical unit number[/b] or [b]LUN[/b] is a number used to identify a [b]logical unit[/b]??
[b]They fixed the sizes, and however much anyone needs, they say the request has to be in multiples of this fixed LUN these days..[/b]
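With a fixed LUN size, a storage request just gets rounded up to a whole number of LUNs. A quick Python sketch; the 100 GB LUN size is only an assumed example:

[code]
import math

LUN_SIZE_GB = 100   # assumed standard LUN size, purely for illustration

def luns_needed(requested_gb):
    """Round a storage request up to whole LUNs of the standard size."""
    count = math.ceil(requested_gb / LUN_SIZE_GB)
    return count, count * LUN_SIZE_GB   # (number of LUNs, allocated capacity)

# e.g. luns_needed(250) -> (3, 300): ask for 250 GB, get three 100 GB LUNs
[/code]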


[quote name='k2s' timestamp='1324405124' post='1301171117']

agreed .....in our meetings too they keep going on about some partitions or other..... if it weren't for their stinginess, I guess they could just put it on a new disk.....
[/quote]

They say big companies spend unnecessarily on storage, bro .. even though the same storage is available cheaper in the market, they keep buying it from preferred vendors at a higher price .. apparently big scams keep running

That's why when additional storage is needed, the budget doesn't get approved .. they say it costs too much

