Kenshoo is looking for a big data engineer.
Kenshoo runs several Hadoop and Cassandra clusters, both in our own data center and on Amazon Web Services. We run a variety of workloads on these clusters and use most of the contemporary technologies in these software stacks.
You will join a team of engineers responsible for building and operating Kenshoo’s big data infrastructure. The team builds the tools and processes that let us operate and monitor this infrastructure efficiently, and contributes to the design of new systems together with our R&D teams.
Requirements:
- At least 5 years’ experience in a high-tech role (DBA / Programmer / IT), including at least one year working with big data
- Good communication and teamwork skills
- Able to take ownership of initiatives and drive them forward within the organization
- Strong understanding of Linux
- Deep understanding of, and hands-on experience with, the Hadoop ecosystem and its workflows
- Hands-on experience with administration, security, configuration management, monitoring, debugging, benchmarking, and performance tuning of the Hadoop ecosystem
- Advantage: experience with any of Python, Java, Scala, or Spark
- Advantage: experience with NoSQL and messaging technologies such as Cassandra, HBase, Elasticsearch, Kafka, or RabbitMQ
- Advantage: background in MySQL
Responsibilities:
- Own and operate various data platforms, such as Hadoop, Spark, and Cassandra
- Provide architectural solutions for complex data challenges involving large-scale systems and rapid growth
- Build and maintain high-performance, fault-tolerant, scalable distributed software systems
- Continuously improve operations processes and procedures
- Work closely with our Data Engineering group and with other teams across Operations and R&D