IBM Job Vacancy Alert for Big Data Engineer


Introduction

In this role, you’ll work in our IBM Client Innovation Center (CIC), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. These centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.

You’ll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you’ll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that address the client's needs.

Your primary responsibilities include:

Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements.

Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.

Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required Technical and Professional Expertise

Experience developing PySpark code for AWS Glue jobs and for EMR.

Experience building scalable distributed data systems using the Hadoop ecosystem on AWS EMR and the MapR distribution.

Experience developing Python and PySpark programs for data analysis, including building a custom Python framework for generating rules (similar to a rules engine).

Experience developing Hadoop Streaming jobs in Python to integrate Python API-supported applications.

Experience developing Python code to gather data from HBase and designing solutions implemented in PySpark.
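For candidates unfamiliar with the "rules engine" pattern mentioned above, here is a minimal sketch in plain Python of what such a custom rule-generation framework might look like. All names here (`Rule`, `RuleEngine`, the sample rule) are illustrative assumptions, not IBM's actual framework.

```python
# A minimal rules-engine-style framework: rules are predicate/action
# pairs evaluated against record dicts. Everything here is illustrative.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]  # returns True when the rule fires
    action: Callable[[dict], dict]     # transforms the record when fired


@dataclass
class RuleEngine:
    rules: list = field(default_factory=list)

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def apply(self, record: dict) -> dict:
        # Apply every matching rule in registration order.
        for rule in self.rules:
            if rule.predicate(record):
                record = rule.action(record)
        return record


engine = RuleEngine()
engine.register(Rule(
    name="flag_high_value",
    predicate=lambda r: r.get("amount", 0) > 1000,
    action=lambda r: {**r, "high_value": True},
))

result = engine.apply({"id": 1, "amount": 2500})
```

In a PySpark job this pattern typically shows up as a library of rules applied inside a transformation over each record or DataFrame row.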
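The Hadoop Streaming requirement above refers to the stdin/stdout contract: a mapper emits tab-separated key/value lines, and a reducer receives them sorted by key. A rough word-count sketch, assuming that contract (the function names and single-script dispatch are illustrative):

```python
# Sketch of a Hadoop Streaming word-count job in Python. The mapper
# emits "word\t1" lines; the reducer sums counts per word, relying on
# Hadoop's guarantee that its input arrives sorted by key.
import sys
from itertools import groupby


def mapper(lines):
    """Emit (word, 1) pairs as tab-separated lines."""
    for line in lines:
        for word in line.strip().split():
            yield f"{word}\t1"


def reducer(lines):
    """Sum counts per word; input must be grouped (sorted) by key."""
    parsed = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"


if __name__ == "__main__":
    # Under hadoop-streaming this would normally be two scripts; a
    # single script dispatching on an argument is used here for brevity.
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    stream = mapper(sys.stdin) if stage == "map" else reducer(sys.stdin)
    sys.stdout.writelines(line + "\n" for line in stream)
```

Hadoop handles the sort-and-shuffle between the two stages; the Python code never sees unsorted reducer input.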

Preferred Technical and Professional Expertise

Experience applying business transformations with Apache Spark DataFrames/RDDs and using HiveContext objects for read/write operations.

Experience rewriting Hive queries in Spark SQL to reduce overall batch time.
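The Hive-to-Spark-SQL rewrite mentioned above usually means expressing the same query as a string passed to `spark.sql(...)`, so Spark's optimizer plans the aggregation instead of a Hive batch job. A rough sketch of the query shape, using `sqlite3` as a stand-in engine since no live Spark session is shown here (the `sales` table and its columns are made up):

```python
# Illustrates the SQL shape of a Hive -> Spark SQL rewrite. sqlite3
# stands in for a Spark session; in PySpark the same QUERY string
# would be executed as spark.sql(QUERY) against a registered table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 75.0)],
)

QUERY = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY region
"""
totals = conn.execute(QUERY).fetchall()
```

Moving such a query into Spark SQL avoids a separate Hive execution stage, which is typically where the batch-time savings come from.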

CLICK HERE TO APPLY ====> APPLY LINK
