Enterprise Data Lake – Big Data Engineer – NYC
A global investment banking firm that has served clients for over 50 years, the company is a leader in providing insight, expertise, and execution to investors, companies, and governments. The firm provides a full range of investment banking, sales, trading, research, and strategy across the spectrum of equities, fixed income, foreign exchange, futures and commodities, and select asset and wealth management strategies in the Americas, Europe, and Asia.
CRM & Analytics Team Overview:
The CRM & Analytics team is a highly strategic, cross-functional team responsible for leading the firm's global digitalization effort. This initiative, spanning all client-facing business units and corporate functions, will drive innovation and strategic change through technology, data science, and deep analytics. The team partners with key business leaders and industry experts to build transformational technology that drives revenue, maximizes efficiency, and optimizes the allocation of resources. The CRM & Analytics team is at the forefront of the company's cloud initiative, leveraging best-in-class cloud-based technologies to replace legacy on-premises solutions and provide intelligent trend insights, actionable opportunities, decision support, and transparency into all client and business-related activities. This team is also responsible for the Enterprise Data Lake.
Enterprise Data Lake (EDL) Team Responsibilities:
The EDL team will oversee and support the architecture and implementation of the EDL for all of the firm's big data initiatives. It will drive data governance and facilitate data onboarding. It will approve the design of data and software architecture, perform architecture reviews to pass EDL tollgates, evaluate and select cloud/AWS/big data tools for acceptance, and serve as a liaison between data lake tool vendors and our internal infrastructure teams. It will also certify data for consumption, define EDL patterns and processes, manage and govern data access controls, and manage data lake and data governance training initiatives across the enterprise.
The EDL team will become the center of excellence for the EDL components and associated tools.
We are looking for an accomplished big data developer with strong experience implementing data solutions on AWS to help us build and integrate data-driven, intelligent cloud solutions for the EDL. This role involves close collaboration with our team of passionate and innovative big data specialists, application developers, and product managers.
This is a unique opportunity to be a member of our corporate CRM & Analytics team, tackling our toughest and most exciting data lake challenges across multiple divisions in the firm.
- A minimum of 5 years of hands-on technical experience with:
  - big data implementation and technology offerings
  - AWS/cloud big data modeling and data management
  - big data analytics and ingestion architecture
  - data lake management and data architecture
  - data lake design patterns and enterprise cloud best practices
  - IoT, streaming, and real-time processing
  - big data-related AWS technologies
- Experience with AWS technologies such as Kinesis, Lambda, EC2, Redshift, RDS, CloudFormation, EMR, S3, and AWS analytics services, as well as Spark and Databricks
- Experience with at least one of the following languages: Scala, Python, R, or Java
- Experience designing, developing, and implementing complex integrations for end-to-end solutions at the middleware and application level, with a focus on performance optimization
- Strong implementation skills in AWS cloud development
- Demonstrated ability to implement scalable, real-time, high-performance data lake solutions on AWS
- Ability to quickly perform proof-of-concepts for validating new technology or approach
- Ability to exercise independent judgment and creative problem-solving techniques in a highly complex environment using leading-edge technology and/or integrating with diverse application systems
- Ability to lead and drive technology change in a fast-paced, dynamic environment and all phases of the entire software life cycle
- Strong experience with data catalog, data governance, Collibra, MDM, and/or Informatica Data Quality (IDQ) toolsets
- Strong experience with integration of diverse data sources (batch and real time) in the cloud
- Experience leading the design and sustainment of data pipelines and data storage
- Expertise in structured and unstructured data, and in SQL and NoSQL technologies
- Expertise in identifying and understanding source data systems and mapping source-system attributes to the target
- Experience with design and automation of ETL/ELT processes
- AWS and cloud performance tuning and optimization experience
- Experience with ongoing effort estimation for new projects and proposals
- Excellent communication skills across all levels; ability to communicate complex technical concepts with ease
- Ability to work effectively in a fast-paced environment
- Exposure to big data technologies such as MapReduce, Hadoop, or other big data platforms
- Exposure to building and deploying data and analytics solutions on AWS or Microsoft Azure cloud platforms
- DevOps experience in cloud and big data
- Exposure to Cognitive computing, ML and AI
- Exposure to graph databases, SPARQL
- Exposure to search technologies such as Lucene/Solr or Elasticsearch
- Exposure to ontologies and taxonomies
- Exposure to data services, APIs, and object-relational (OR) mapping techniques
- Exposure to Financial Services Industry
- Experience with two or more vendors or products in any of the following areas: Informatica, SQL Server, Oracle 11g, MySQL, SQL data warehouse appliances, Oracle Exadata, Netezza, Greenplum, Vertica, Teradata, Aster Data, SAP HANA, Hadoop, SAS, SPSS, Spotfire, Tableau, QlikView, R, Oracle Endeca, Oracle OBIEE, SAP BusinessObjects, and other analytics vendors with BI components
Job Category: Artificial Intelligence, Banking, Banking & Finance, Finance, Information Technology, Technology