Spartan Posted December 5, 2013 Oracle's Big Data team is looking for a versatile Hadoop Operations candidate with a broad range of systems engineering skills. Candidates should demonstrate working knowledge of the Hadoop ecosystem. With the recent introduction of the Oracle Big Data Appliance and Oracle Big Data Connectors, Oracle is the first vendor to offer a complete, integrated solution addressing the full spectrum of enterprise big data requirements. Oracle's big data strategy is centered on the idea that you can evolve your current enterprise data architecture to incorporate big data and deliver business value. By evolving your current architecture, you can leverage the proven reliability, flexibility, and performance of your Oracle systems to address your big data requirements. The Oracle Big Data team is working toward an end-to-end solution in which big data is distilled and analyzed in combination with traditional enterprise data, so that enterprises can develop a more thorough and insightful understanding of their business. That understanding can lead to enhanced productivity, a stronger competitive position, and greater innovation, all of which can have a significant impact on the bottom line. The goals: empower the business with self-service discovery while maintaining the IT stewardship needed for actionable decisions the business can trust; unlock the insights embedded in qualitative, unstructured data both inside and outside the enterprise, including customer-service verbatims, social media, and external websites; make it easy to explore the data relevant to your business without investing in complicated modeling; and bring those findings back into the analytics driving your organization to achieve a complete view of the factors affecting your business. Candidates should be able to troubleshoot various types of network, system, and application issues.
Our team provides infrastructure and support for large Internet-facing applications by building and maintaining Hadoop clusters and the application stack. We work in a high-interrupt environment with our end users and have very aggressive project delivery timelines. Great communication skills with project managers and engineers are an absolute must.
Education
 - Technical BS/MS degree
Additional Requirements
 - 5+ years in UNIX system administration
 - Hands-on knowledge of Hive, HBase, Pig
 - Excellent communication skills, both written and verbal
 - Excellent troubleshooting
 - Excellent documentation habits
 - Strong scripting in shell, Python, and Perl
 - Good working knowledge of various server hardware and lights-out management
 - Exposure to Oozie, Impala
 - Exposure to Agile Scrum
 - Exposure to networking & load-balancing solutions
 - Exposure to configuration management with Puppet (preferred)
 - Exposure to revision control systems with branching and tagging (SVN, Git)
This is a notification that Oracle performs background checks on all candidates at the time of offer. Verification includes dates of employment, educational degree, and criminal record. Any incorrect data provided may make you ineligible for employment at Oracle. Please ensure in advance that all information listed on your resume is accurate.
***********************************************************************************************************************************************
If you are interested in this opportunity, please send an invite along with your resume directly to [email protected] with the subject line "Principal Hadoop Infrastructure Engineer - Direct Hire with Oracle - Redwood City, CA" for immediate consideration.
***********************************************************************************************************************************************
No agencies please.
Desired Skills and Experience
· Hadoop knowledgeable
· Security conscious (LDAP, locked-down hosts, PCI, SOX)
· Performance tuning and capacity planning
· Network administration (LACP, vPC, bonding)
· Linux/UNIX administration (RHEL, CentOS)
· Creating RPMs from source and automating RPM deployments
· NFS and NAS appliances
Spartan Posted December 5, 2013 Author Is Oracle the client?.... There's no client here, man... it's a full-time position at Oracle..
Yuva Nataratna Posted December 5, 2013 "There's no client here, man... it's a full-time position at Oracle.." Whoa.....
k2s Posted December 5, 2013
Principal level? That's going to be tough, man..
vikuba Posted December 5, 2013 "Principal level? That's going to be tough, man.." They say that position is open.. go for it.
Maximus Posted December 5, 2013 "They say that position is open.. go for it." Uncle has 3 years of experience in that role..