Mavensoft is seeking applications for a Hadoop Big Data Engineer from qualified, interested, and available candidates for an 8-month contract assignment with a leading company in Portland, OR. If you are interested and meet the requirements below, please email or apply online with your salary expectations for immediate consideration. NOTE: Corp-to-Corp (C2C) candidates *will not* be considered.

Candidates MUST have 5+ years' experience as a Big Data Engineer in a real-world business operations environment, with a passion for problem solving and customer service. The ideal candidate will have search, recommendation, retail, commerce behavior, and/or digital experience, and must be a proactive self-starter with the ability to work independently.
Job Title: Hadoop Big Data Engineer – Java/Scala, Python, SQL
Duration: 8-month contract
Location: Beaverton, OR
Remote Work: Fully remote is fine, but the candidate must have significant time overlap (6+ hours) with the team, which works on Pacific Time. The candidate is expected to work on-site once the client office reopens after the COVID-19 shutdown.
Key Skills: Programming with Java/Scala, Python, shell scripting, and SQL
Desired Skills: DevOps and NoSQL (HBase, DynamoDB, Cassandra); Apache Spark, Flink, Kafka
Bonus Skills: Development in Spark, MapReduce, and Hive; strong RESTful API skills
Job Description: Our client is embracing Big Data technologies to enable data-driven decisions and expanding their Big Data Engineering team to keep pace. As a Sr. Big Data Engineer, you will work with a variety of talented teammates and be a driving force in technical initiatives that will accelerate analytics. You will be working on projects that build data artifacts to answer questions about consumer behavior, commerce trends, consumer touchpoint preferences and more!
Job Responsibilities:
- Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem. You will be given the opportunity to own the design and implementation, collaborating with product managers, data scientists, and engineers to accomplish your tasks.
- Publish RESTful APIs to enable real-time data consumption using OpenAPI specifications, allowing many teams to consume the data being produced.
- Explore and build proofs of concept using open-source NoSQL technologies such as HBase, DynamoDB, and Cassandra, and distributed stream processing frameworks such as Apache Spark, Flink, and Kafka Streams.
- Take part in DevOps by building utilities, user-defined functions, and frameworks to better enable data flow patterns.
- Work with architecture/engineering leads and other teammates to ensure high-quality solutions through code reviews and engineering best-practices documentation.
- Experience with business rule management systems such as Drools will also come in handy.
Education and Qualifications Required: MS/BS degree in computer science or a related discipline
Experience required for this job: 3+ years' experience in large-scale software development/Big Data technologies; programming skills in Java/Scala, Python, shell scripting, and SQL; development skills in Spark, MapReduce, and Hive, plus strong skills in developing RESTful APIs.